Top backup tips for SMEs – protect your valuable data from load shedding problems

Although the South African power crisis is being addressed, the local market can still expect the possibility of load shedding, creating a challenge for businesses to say the least. Aside from the disruptions that these outages can cause, there is also the possible resultant damage to sensitive electronic equipment, such as hard disk drives and servers, which can cause data corruption and loss. This can pose serious problems, particularly in the Small to Medium Enterprise (SME) market, where the ability to recover quickly from data issues is critical to the sustainability of the business. In light of this, backups are now more important than ever, and with advances in technology, there are backup solutions that make financial sense to the SME.

 

Having an Uninterruptible Power Supply (UPS) installed is only part of the solution, enabling a graceful shutdown of equipment. Backups form the other part of a successful load shedding survival strategy, ensuring that data is always stored somewhere safe and can be easily restored in case of a problem. These five backup tips for SMEs will help smaller businesses protect themselves from data loss and the resulting consequences to business that this can cause.

 

1. Use built-in backup solutions

Many Operating Systems (OS) feature standard backup software as part of the package, and these solutions are proven to work. An example is Apple Time Machine, the standard backup solution on all Mac computers. They are also an affordable option for the SME market and do not require much in the way of investment. Purchasing an inexpensive external hard drive to back up to will also provide a level of protection. However, this does not necessarily solve the problem of power outages, as external drives often require an external power source. This is where a UPS comes in, as it can be connected to the external drive to ensure it can be shut down safely in the event of an outage.

 

2. Ensure rotational backup systems are in place

Having one backup these days is simply not sufficient, because of the critical nature of business data and the consequences of losing this valuable information. It is therefore essential to ensure that you have more than one backup hard drive and that these drives are rotated regularly, minimising the risk should one of the hard drives fail. External hard drives are fairly inexpensive, and the initial outlay of purchasing an additional hard drive is minimal compared to the cost should the sole backup drive fail.

 

3. Keep your backup off-site (or in the cloud)

Rotational backups enable one drive to be kept on premises and another one to be stored offsite. The saying ‘don’t keep all of your eggs in one basket’ is applicable here – there is no point having three copies of your data if they are all stored in one place, as should the office be burgled, or be subject to fire, flooding or other disaster, all copies of the data are likely to be lost. When using rotational backups, ensure one drive is always locked away safely at another site. Another option is to use cloud-based backup solutions. Many of these are available for free, such as Dropbox and Google Drive, which offer a limited amount of storage space, with additional space available for a nominal fee. Once a cloud backup has been conducted, it can then be incrementally updated, saving bandwidth costs. Cloud backups ensure that no matter what happens on site, a copy of data is always stored somewhere else, and can be easily accessed from anywhere using a web portal.

 

4. Ensure you can do a bare metal restore

Specifically in server environments, it is not only important files and folders that are stored, but also applications, settings and configurations specific to the OS environment. Should the server crash, getting back up and running quickly relies on the ability to do what is known as a 'bare metal restore', which allows you to restore an entire computer system. Using bare metal restore capabilities, the backed-up data includes the necessary OS, applications and data components to rebuild or restore the backed-up system on an entirely separate piece of hardware. This ensures that you can get back up and running on new hardware, with the server restored to its state as at the last backup.

 

5. Test your backups

Having a backup in place is all very well, but if this backup cannot be used to successfully restore data, it is practically useless. It is therefore important to understand how your backup works, so that you can test to see whether it is being performed correctly and that your data can be restored should this be necessary. One way of doing this is to schedule a routine recovery on the system to make sure that all data is being backed up. Many of today's backup solutions come with wizard-driven file recovery for data restoration; however, restoration is often a more complex process, and it is recommended that you use your IT partner to assist with advanced system recovery.

 

Protect your business and your data

Time and time again, it has been proven that backups save businesses. The increased risk of load shedding in the coming months highlights this need. When power is suddenly cut, hard drives do not shut down properly, and there is a high risk of the disk crashing and losing or corrupting data. Following these five backup tips will ensure that, in the event of drive failure, data can be easily restored with minimal disruption to business. Aside from the risk of load shedding, having adequate backup is also sound business practice, as disk failure is a common occurrence. No business can afford to be without their data, and therefore no business can afford not to have adequate data backup.

Published in Storage & Data Centres
Tuesday, 22 October 2013 12:05

The changing role of resellers


Educating customers on backup is critical for data protection

Data has become the lifeblood of any organisation, and with an increasing shift away from hardware towards a more service-oriented market, the role of the reseller when it comes to backup has changed dramatically.  A ‘box drop’ approach is no longer sufficient, given the critical nature of data. Resellers of data protection solutions now have a responsibility to their customers to educate them on backup solutions and practices and ensure business continuity by making certain that their customers can recover effectively in the event of a data issue.

 

While most large organisations have realised the critical nature of effective backup and recovery solutions, the Small to Medium Business (SMB) market still relies heavily on memory sticks, external hard drives and other ad-hoc backup processes, if they have any such processes in place at all. However, these backups are often not regularly checked, and only when a data issue occurs and a restore is necessary do the problems with this method become evident. In addition, when backups are recovered from such devices, it is usually difficult or impossible to recover just the missing data, and only the data from the last backup can be restored. This usually results in work since the last backup being lost.

 

Irregular or infrequent backups often go hand-in-hand with users not checking their backed-up data for integrity and the ability to recover it as and when necessary, which can cripple the business in the event of data not being recoverable. Without this critical data, many SMB organisations simply cannot recover, leading to lost income and even the closure of the business itself. With approximately 97% of all data restores necessitated by hardware failure, hard drive malfunctions or data corruption, the need for end users in businesses of all sizes to move to automated backup environments is clear.

 

Resellers of these solutions are in a favourable position to educate their end user customers on the benefits of automated backup and the repercussions of not having a plan or process in place. Many businesses, particularly in the SMB space, do not have the expertise or capacity to adequately manage backup and recovery on their own. Added to this, the research required to find a solution that is ‘fit for purpose’ has proved onerous in the past, leading to poor backup practices that can cause problems further down the line.

 

As providers of backup solutions, these resellers understand the market, the challenges, and the needs of their customers, and are also able to offer a managed service that delivers more comprehensive backup and recovery. With the evolution of technology, there is also a wider range of solutions on offer to deliver fast, efficient and above all automated backup to protect vital data.

 

There are now a host of best-of-breed solutions available for businesses of all sizes, addressing backup from the level of individual PCs right up to servers and entire data centres. The growth of the cloud, and increased trust in cloud solutions, has also provided another avenue for resellers to offer remote backup solutions, which store data securely offsite in the cloud, meeting best practice guidelines and ensuring always-available data recovery.

 

In order to take advantage of new opportunities and provide better customer services, resellers need to make the leap from selling products to providing solutions and services that deliver value to their customers. The onus is now on resellers to take this proactive step, do their research and find the right products, including cloud or hosted platforms, to adopt and sell on to their customers. This not only opens up new revenue streams, but delivers immense satisfaction in knowing that customers’ data is secure and properly backed up. Resellers have the opportunity to become trusted partners and reinforce relationships, strengthening their own business while helping their customers at the same time.

 

By taking on this new role, not only are resellers able to take on a more strategic position in an IT world dominated by the cloud, they are also able to benefit from improved credibility and annuity revenue that results from selling solutions and advice rather than simply products.

Published in Storage & Data Centres
Thursday, 20 June 2013 11:00

The Shift to ‘Disposable’ IT


With the prices of our most treasured gadgets falling every year, coupled with the increasing sophistication of ‘cloud’ solutions, IT is fast becoming highly ‘disposable.’ Essentially, consumers become more attached to the content that their hardware houses – more so than the actual hardware – so replacing technology when it becomes outdated (or is rendered unusable, for some reason) is becoming less painful – provided that the content is securely stored and backed up somewhere.

 

Indeed, in recent years, we have noticed a significant shift in our customers’ approach to technology. In the past, customers used to look at their PC as an asset that required maintenance to conserve its value and that could be upgraded – now, upgrades are hardly considered because the cost of labour and parts to upgrade a PC is often greater than the cost of a better, newer model.

 

For example, with regards to popular electronics like DVD players, you can buy a basic player for around R400. Should it break, however, you would need to look for a repair shop, wait for repairs, retrieve it and perhaps spend R400 for the repair (and have an old DVD player as opposed to getting a brand new one delivered to your door for the same price).

 

Another important aspect to consider is how cumbersome the process of transferring your data to a new machine used to be. This acted as a deterrent against buying new tech and encouraged people to repair what they had. Yet these days, if you know how to go about it, it is very easy to be up and running with a new computer, tablet or smartphone if your data is in the cloud.

 

The gadgets people use are also important status symbols: no one is interested in your old, upgraded PC, but you can definitely show off the latest smartphone and do almost exactly the same things you can do with a PC (view photos, open documents, send and receive emails, etc).

 

Up in the Cloud

The technology that is driving the shift to 'disposable IT' is the widespread move to the cloud; i.e. to storing and backing up information on outside servers. Numerous platforms have sprung up to make the move to the cloud as smooth as possible. Evernote is a fantastic, free tool to keep your notes, web clips or even voice memos synchronized across all your devices. An Office 365 subscription, as well as Gmail or iCloud, are all great ways to access your email from anywhere and make sure it is never lost. In addition, you can store documents and pictures on sites like dropbox.com, where you have a certain amount of storage space available for free (if you need more, you can purchase a subscription). Similar services are available from other providers, and all of these platforms will keep your files safe and accessible from most devices.

 

In the business environment, SharePoint and Office 365 have enabled many small businesses to make do without costly hardware like servers that require maintenance and regular upgrades. Coupled with a reliable support company that ensures that your network is safe and functional, the cloud can take care of most of your IT requirements at an affordable price.

 

A Word of Advice…
I would definitely encourage people to make use of the cloud. I personally don't need to back up anything anymore. Even if a big fire destroys all my gadgets (and I have quite a few), all I need to do is get a new one, log in (to where my info is stored) and get all my data back. However, don't forget to be safe on the Internet and run antivirus software. The danger of synchronization across devices is that if one file becomes corrupted or infected, it can easily spread across all your devices, so always be cautious.

 

Defining ‘the Cloud’

At its root, cloud computing is a service that provides IT solutions via the Internet. The "cloud" provides services that vary in capability - from basic e-mail service to enterprise software applications such as customer relationship management (CRM), and of course, storage. The main objective of cloud computing is to provide companies and consumers with an accessible and seamless platform to build and host applications and keep their data secure.

Published in Hardware
Friday, 18 January 2013 10:30

Ten reasons to move your backup to the cloud


The data explosion is increasing demand for data storage, driving up costs, amplifying the risk of data loss or exposure and complicating disaster recovery plans and strategies. Furthermore, organisations are increasingly moving away from cumbersome, error-prone, tape-based backup solutions. As a result of these factors, cloud-based data protection and backup solutions are becoming increasingly attractive.

Published in Storage & Data Centres
Thursday, 29 November 2012 10:15

Prevent database disasters with a simple checklist


In today’s information-driven age, the database is the heart of any organisation. From running applications to processing transactions and storing customer and other mission critical data, without the database businesses simply cannot function. Despite the critical nature of the database, many companies do not have a comprehensive backup and disaster recovery strategy in place and resort to crisis management when their database crashes, often resulting in costly downtime. 

 

There are a few checklist items to consider with backup and disaster recovery, ensuring minimal disruption and most importantly, continuity for the business.

Checklist Item #1: The backup and disaster recovery strategy

Whether organisations run a full disaster recovery environment or simply conduct regular backups, having a plan and processes in place to govern this in the event of an emergency can literally save a business.

 

A backup and disaster recovery strategy is therefore essential for every modern business of any size. This is the most important step in ensuring your database is not a disaster waiting to happen.

 

In order to develop this strategy, organisations firstly need to understand how critical their data is to the business. Not all organisations require a full disaster recovery environment, as these can be costly to implement. Furthermore, not all data is mission critical, and the loss of some data, or a delay in recovering it, will not cause the business to fail. However, at the very least, all data needs to be maintained in some form of working backup environment, and these backups need to be conducted in line with business rules. Business rules govern the backup and recovery strategy, and outline how data should be stored and restored, as well as guide the times required for a restore to take place and more.

 

A full disaster recovery environment is obviously preferable for mission critical databases, as when disaster happens the environment can simply be ‘switched over’ with minimum downtime and disruption. The disaster recovery environment should be in sync with the production environment and should also be regularly tested. If a disaster recovery environment is not in place, backups need to be stored in a minimum of three separate locations to ensure that at least one recovery copy is available for restore.

 

Regardless of the recovery method, the processes involved must be clearly documented. It is essential to list the order of procedures, the steps that need to be taken, the required turnaround times and who is responsible for ensuring that all functions are fulfilled. All parties involved should clearly understand their role. The failover processes must be regularly tested to ensure that when a disaster happens, these processes are seamless. When testing, the failover processes should also generate a log to establish which ones are successful and which ones are not, allowing the appropriate person to remedy any failures.
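The logged failover test described above can be sketched in a few lines: run each named check, record pass or fail, and log the result so the responsible person knows what to remedy. The check names and structure here are purely illustrative assumptions:

```python
import logging

def run_failover_checks(checks: dict) -> dict:
    """Run each named failover check (a callable that raises on failure),
    log the outcome, and return a name -> pass/fail map."""
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = True
            logging.info("failover check passed: %s", name)
        except Exception as exc:
            results[name] = False
            logging.error("failover check FAILED: %s (%s)", name, exc)
    return results
```

In practice each check would probe a real resource (replica lag, DNS cutover, restored service health), but the pattern of testing everything and logging per-check results is the same at any scale.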

Checklist Item #2: Address database security

Building security into the database is important, both from a physical and a data perspective. This is addressed in various legislation and governance codes, including Sarbanes-Oxley (SOX) and the King III guidelines, to name a few, making database security a compliance requirement. The requirement for database security also extends to any backup copies of data and disaster recovery environments.

 

Physical security such as access control, intrusion prevention and detection, fire detection and suppression will help to prevent unauthorised persons from accessing the physical storage areas and minimise the impact of disasters such as fire. Data security must also be implemented to prevent unauthorised data access and theft from the corporate network.  This is critical given the rise in cybercrime. It is also important to ensure that the database itself and all backups receive the same protection levels. Without IT security, data can be lost, corrupted or more frequently in today’s world, stolen for sinister purposes. Data must be protected to prevent business downtime, which results in loss of revenue and reputation.

Checklist Item #3: Database administration

Whether you use an internal Database Administrator (DBA) or the services of an outsource provider, it is vital to be 100% comfortable with the DBA and the levels of support that are delivered.  The DBA has access to all company data and therefore must be highly trustworthy.

 

The service levels delivered must also be checked, as bad service both in-house and outsourced can negatively impact database downtime and cost the business. This can be addressed in a solid Service Level Agreement (SLA) and Operations Level Agreement (OLA). However, the DBA or outsource provider should maintain the backup strategy, the frequency of testing processes, the documentation and availability of this documentation as well as all planned failover testing. If these services are not being delivered, an organisation should question the value that the DBA is delivering.

Checklist Item #4: Check your SLAs

SLAs must fit the requirements of the business and should support disaster recovery and restore goals. The infrastructure of the database and recovery environment needs to allow for either a full disaster recovery failover to take place or regular backups which require, amongst other things, enough disk space. SLAs must factor this in and meet the specific disaster recovery needs of the organisation.

 

For example, an online e-Commerce store cannot afford to have any downtime due to the nature of their 24x7x365 business. Therefore, their SLA should include service levels that ensure maximum uptime and fast restore times with disaster recovery. Other businesses, such as a legal firm, may need to have their data restored within a few hours, or a day. This type of business won’t collapse if the data restore is completed within 24 or even 48 hours.  Therefore, the SLA must accommodate these factors and should also be in line with the disaster recovery strategy supported by the business rules and processes. If SLAs do not fall in line with business requirements, they need to be reassessed. However, it is also important to bear in mind that 99.999% uptime and fast recovery comes at a price. The balance of expense, functionality and best possible service levels to meet the business’ needs must be considered when defining an SLA.
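To see concretely why 'five nines' comes at a price, an SLA uptime percentage can be translated into allowed downtime per year. A small worked sketch (the function name is illustrative):

```python
def allowed_downtime_minutes(uptime_pct: float,
                             period_hours: float = 24 * 365) -> float:
    """Convert an SLA uptime percentage into the minutes of downtime
    permitted over the period (one year by default)."""
    return period_hours * 60 * (1 - uptime_pct / 100)
```

At 99.999% uptime, only about 5.3 minutes of downtime per year are permitted, versus roughly 8.8 hours at 99.9%, which is why the infrastructure needed to honour the stricter SLA costs so much more.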

 

In addition, the SLA should incorporate regular testing of the disaster recovery plan to ensure that it works, eliminating much frustration in the event of failure. 

Conclusion

Ultimately any disaster recovery solution minimises downtime.  Downtime costs money and this is often more expensive than the implementation of a full disaster recovery environment. If this is not possible, having a strategy in place is critical to ensure that processes are followed. Maintaining a stable database environment is equally important for business continuity.  A checklist that covers these aspects of database backup and recovery will help to mitigate risk, minimise downtime and ensure businesses are up and running in the shortest possible time in the event of a disaster.

Published in Storage & Data Centres
Monday, 29 October 2012 10:36

Polokwane Court fire reminds all businesses not to let records go up in smoke


The recent fire that broke out at the Polokwane Magistrates Court, destroying not only the building but also computers and court documents, highlights the importance for any organisation of implementing effective records management and information protection procedures to avoid the financial, operational and reputational repercussions of loss or damage to critical information in the event of a fire.

 

This is according to Leon Thompson, General Manager of Metrofile Records Management: Pretoria, a group company of JSE-listed Metrofile Holdings Limited, who says failure to securely store and back up critical records means the organisation has to rebuild its database from scratch in the event of it being destroyed by a fire. "Reconstituting this information and records could take months, or even years, which can have a potentially devastating impact by delaying, hindering or even permanently halting operations."

 

Statistics from the UK Home Office reveal that 30% of all businesses that suffer a major fire shut down within a year and 70% fail within five years. In addition, since 2000 there has been a 24% increase in the total cost of fire in the UK, totalling £7.7bn worth of financial losses.

 

“Fire poses one of the biggest threats to any organisation that deals with documents on a daily basis. The legal system in particular faces a particularly high risk of losing vital documents and evidence needed for legal cases. Therefore, it is imperative that institutions such as court houses incorporate effective risk management programmes to protect these assets accordingly,” says Thompson.

 

He says that should an organisation have no data storage and recovery plan in place the potential business repercussions include, among others, financial losses, damage to brand reputation, costly litigation, job losses and total business inoperability. “Data is unquantifiable and therefore extremely difficult to insure, making it impossible to recover any financial losses.

 

"Organisations have two options: onsite or offsite records management. Companies that need immediate access to documents and records have no choice but to store them onsite; the key lies in implementing an effective back-up solution and storing the documents in a secure environment that can protect the records from damp, fire and water damage as well as insect infestation. Essential to this solution are effective fire detection systems."

 

Thompson says offsite records management is becoming increasingly popular due to space constraints of storing records onsite as well as the costs involved with implementing the effective security measures needed to securely store records, including fire detection and prevention.

 

"Offsite records management entails the storage of company data and information in purpose-built facilities incorporating data protection, which involves securing a backup data tape in an off-site vault. Storage facilities are specifically situated in low-risk areas where exposure to flooding, fires, earthquakes and other natural disasters, or hazards such as flight paths, is least probable."

 

Thompson says that a combination of both physical and online data backup provides the most comprehensive backup storage system. “The online disaster recovery site continuously mirrors the information stored to the records management storage system to avoid loss of data due to data corruption.

 

“Closure as a result of loss or damage to company data and information is becoming increasingly prevalent, yet this risk is so easily mitigated,” concludes Thompson.

Published in Storage & Data Centres
Monday, 13 August 2012 11:46

Improving visibility and transparency to better manage virtualised data environments


Data is getting bigger, virtualisation is expanding, and data protection applications are ill-prepared to deal with the challenges this model poses. There is a distinct need within virtualised environments to improve visibility and transparency, as virtual machines simply lack the level of visibility seen in physical environments. Virtualised data needs to be protected at the same level as physical data, but in a manner that is fitting to its unique attributes. Organisations need to bridge the gap between old and new, between physical and virtual, and more effectively manage virtualised data centres.
As workloads housed in virtual machines grow increasingly complex, the challenge for organisations lies in maintaining the same degree of security and protection as physical environments enjoy, while still leveraging the benefits of virtualisation. This requires a high level of visibility into the virtual environments, as organisations need to be able to view the virtual environment in order to better secure, protect, backup and recover these virtual machines.
However, there are several unique challenges involved here. Organisations need to avoid backing up large amounts of redundant data, as this can waste storage space and cause unnecessary expense and reduced performance. Enterprises also need to secure virtual machines while they are operational, without sacrificing on performance, and need to have a clear picture of exactly how many virtual machines are running in their environment for security and control purposes.
Solutions for managing virtualised data environments need certain features that lend themselves to this environment. They need to be intelligent: able to identify duplicate files to avoid unnecessary redundancy, and able to identify files that have already been scanned and have not changed so that these files can be skipped, improving performance. IT management solutions also need to be able to quickly identify all virtual machines active on a LAN. They should also enable organisations to back up entire virtual machines without sacrificing file-level recovery and without taxing the infrastructure. Furthermore, these solutions should automate the protection of virtual machines, regardless of their location, without hindering operations or infrastructure. It should also be possible to de-duplicate both virtual and physical machine data globally and store it in a single pool.
However, while these features are critical for virtualised environments, this does not necessarily mean that a new backup and security solution is required. These features can also be applied to improve data management within physical environments. A single solution that enables improved management across both virtual and physical environments will provide the highest level of benefit with optimised cost, helping to further bridge the gap between the physical and the virtual.
Backup and storage solutions should also extend beyond data protection and storage. Improved visibility increases an organisation's ability to see into the security of virtual environments. Security for virtualised environments has similar requirements to physical environments, but with additional constraints, including the capability to secure the virtual infrastructure while using less memory, less CPU power and less disk input/output so that performance is not negatively impacted by security.
This again requires a certain level of intelligence. Features to look for include the ability to see the risk posed by a file on a virtual machine without opening or scanning it, reducing scan overhead, and the ability to separate physical clients from virtual ones and automatically apply the relevant security policies. De-duplicated file scanning further reduces scan overhead by identifying which files have been scanned, and sharing those results across the virtual environment to ensure they are not rescanned unnecessarily. It is also important for security solutions to be able to scan dark images, in other words files on virtual machines that are offline, and to ensure that virtual machines are updated and fully compliant with security policies before they can access the network.
Gaining greater insight into the virtual environment for improved management and security has a number of tangible benefits, including simplified management, lower IT operational costs and improved IT agility. Ensuring that solutions cover backup, storage and security and apply to both types of environments further extends these benefits, enabling the management of data protection in both physical and virtual systems through one console, simplifying processes and consolidating tools.
With the explosion of virtualisation, adequate data management and protection is critical. Organisations need to ensure that they implement best of breed backup, storage, and security to not only manage but accelerate virtualisation of critical business applications, leveraging the benefits of virtualisation while minimising the risks.

Published in Security
Thursday, 05 July 2012 14:46

From luxury to commodity to absolute necessity - the consumerisation of digital storage


The world’s first hard drive, shipped in 1956, was the size of two refrigerators, held just 5MB of data and cost an incredible $10 000 per megabyte. By today’s standards this would not even hold a single high quality MP3 format song, could probably only hold one or two photographs, and the drive would take up more space than the average office cubicle. Storage has come a long way in less than 60 years, becoming smaller, more portable and higher in capacity, and the price per gigabyte has continued to drop, from $300 000 in 1981 to less than 10 US cents in 2010.

Digital storage has evolved from a rarity to a luxury and from a luxury to a consumer commodity, driven by constant connectivity and a lifestyle that drives digital content generation. One could even go so far as to say that storage is a necessity, since much of our lives are now online and losing entire hard drives full of data could be a disaster for business and personal users alike.

Driven by more widely available connectivity and evolving technology, file sizes are only getting larger, and high definition photographs, music, video and multimedia are becoming easier to download and share. These files all need to be securely stored and backed up. Users are also becoming increasingly mobile and want to be able to take their digital files with them wherever they go. The trend of smaller, more portable, cheaper hard drives with greater capacity than ever is one that is set to drive the future of storage, as content generation and sharing continues to grow and file sizes increase.

The sheer number of digital devices available to consumers is also contributing to the growth in demand for storage. Smartphones allow users to take photographs and share them, receive emails, even create videos, and tablet PCs enable all of this and more. Users also want to be able to share this content online, which means that whoever is hosting social media sites will also need more and more storage. Content creation and the social media revolution are driving demand for consumer storage, and the average home now requires in the region of 1TB of storage, compared to a few hundred gigabytes a few years ago.

However, as more and more data is stored digitally, the need for backup storage has also grown. Users who store their music, movies, photographs and more in a digital format need to ensure that this data is also securely backed up, whether this is on a portable hard drive, a desktop hard drive, or increasingly in a network attached storage environment that also enables personal cloud storage for access anywhere anytime.

When it comes to storage, mobility is key. Portable storage enables users to take their files anywhere, but networked cloud storage allows them to keep their files in a centralised facility and access them from a variety of different devices. And while portable drives are increasing in capacity, this space still remains limited, and the cloud is becoming an increasingly attractive option. Networked storage with personal cloud capability offers the best of both worlds, with access to centralised storage from multiple devices without the security concerns of the public cloud. Networked storage also enables users to store content on a central drive and then stream it wirelessly to televisions and other smart devices. This convergence and the emergence of the connected home is again a result of the proliferation of content creation and the current lifestyle of sharing and collaboration.

One thing is certain: digital storage is here to stay, and we can fully expect it to follow the same trend as it has for the past 50 years and more. We will continue to see more storage, in a more portable format, with greater speeds and at a more affordable price. Storage is no longer a luxury, but a necessary commodity that keeps businesses and consumers connected to their world.

Published in Hardware

Copyright © 2013 gdmc (Geoffrey Dean Marketing Corporation cc). All rights reserved. Material may not be published or reproduced in any form without prior written permission. Use of this site constitutes acceptance of our Terms & Conditions and Privacy Policy. External links are provided for reference purposes. SALeader.co.za is not responsible for the content of external Internet sites.
