Tuesday, 22 October 2013 12:05

The changing role of resellers


Educating customers on backup is critical for data protection

Data has become the lifeblood of any organisation, and with an increasing shift away from hardware towards a more service-oriented market, the role of the reseller when it comes to backup has changed dramatically.  A ‘box drop’ approach is no longer sufficient, given the critical nature of data. Resellers of data protection solutions now have a responsibility to their customers to educate them on backup solutions and practices and ensure business continuity by making certain that their customers can recover effectively in the event of a data issue.

 

While most large organisations have realised the critical nature of effective backup and recovery solutions, the Small to Medium Business (SMB) market still relies heavily on memory sticks, external hard drives and other ad-hoc backup processes, if they have any such processes in place at all. However, these backups are often not regularly checked, and only when a data issue occurs and a restore is necessary do the problems with this method become evident. In addition, when backups are recovered from such devices, it is usually difficult or impossible to recover just the missing data, and only the data from the last backup can be restored. This usually results in work since the last backup being lost.

 

Irregular or infrequent backups often go hand-in-hand with users not checking their backed-up data for integrity and recoverability, which can cripple the business if data turns out not to be recoverable. Without this critical data, many SMB organisations simply cannot recover, leading to lost income and even the closure of the business itself. With approximately 97% of all data restores necessitated by hardware failure, hard drive malfunctions or data corruption, the need for end users in businesses of all sizes to move to automated backup environments is clear.

 

Resellers of these solutions are in a favourable position to educate their end user customers on the benefits of automated backup and the repercussions of not having a plan or process in place. Many businesses, particularly in the SMB space, do not have the expertise or capacity to adequately manage backup and recovery on their own. Added to this, the research required to find a solution that is ‘fit for purpose’ has proved onerous in the past, leading to poor backup practices that can cause problems further down the line.

 

As providers of backup solutions, these resellers understand the market, the challenges, and the needs of their customers, and are also able to offer a managed service that delivers more comprehensive backup and recovery. With the evolution of technology, there is also a wider range of solutions on offer to deliver fast, efficient and above all automated backup to protect vital data.

 

There are now a host of best-of-breed solutions available for businesses of all sizes, addressing backup from the level of individual PCs right up to servers and entire data centres. The growth of the cloud, and increased trust in cloud solutions, has also provided another avenue for resellers to offer remote backup solutions, which store data securely offsite in the cloud, meeting best practice guidelines and ensuring always-available data recovery.

 

In order to take advantage of new opportunities and provide better customer services, resellers need to make the leap from selling products to providing solutions and services that deliver value to their customers. The onus is now on resellers to take this proactive step, do their research and find the right products, including cloud or hosted platforms, to adopt and sell on to their customers. This not only opens up new revenue streams, but delivers immense satisfaction in knowing that customers’ data is secure and properly backed up. Resellers have the opportunity to become trusted partners and reinforce relationships, strengthening their own business while helping their customers at the same time.

 

By taking on this new role, resellers are not only able to occupy a more strategic position in an IT world dominated by the cloud, but also to benefit from the improved credibility and annuity revenue that come from selling solutions and advice rather than simply products.

Published in Storage & Data Centres
Monday, 29 October 2012 10:36

Polokwane Court fire reminds all businesses not to let records go up in smoke


The recent fire that broke out at the Polokwane Magistrates Court, destroying not only the building but also computers and court documents, highlights how important it is for any organisation to implement effective records management and information protection procedures to avoid the financial, operational and reputational repercussions of losing or damaging critical information in a fire.

 

This is according to Leon Thompson, General Manager of Metrofile Records Management: Pretoria, a group company of JSE-listed Metrofile Holdings Limited, who says that failure to securely store and back up critical records means an organisation has to rebuild its database from scratch in the event of it being destroyed by a fire. “Reconstituting this information and these records could take months, or even years, which can have a potentially devastating impact on the business by delaying, hindering or even permanently halting operations.”

 

Statistics from the UK Home Office reveal that 30% of all businesses that suffer a major fire shut down within a year, and 70% fail within five years. In addition, the total cost of fire in the UK has risen by 24% since 2000, amounting to £7.7bn in financial losses.

 

“Fire poses one of the biggest threats to any organisation that deals with documents on a daily basis. The legal system faces a particularly high risk of losing vital documents and evidence needed for legal cases. It is therefore imperative that institutions such as court houses incorporate effective risk management programmes to protect these assets accordingly,” says Thompson.

 

He says that should an organisation have no data storage and recovery plan in place, the potential business repercussions include, among others, financial losses, damage to brand reputation, costly litigation, job losses and total business inoperability. “Data is unquantifiable and therefore extremely difficult to insure, making it impossible to recover any financial losses.

 

“Organisations have two options: onsite or offsite records management. Companies that need immediate access to documents and records have no choice but to store them onsite; the key lies in implementing an effective back-up solution and storing the documents in a secure environment that protects the records from damp, fire and water damage as well as insect infestation. Essential to this solution are effective fire detection systems.”

 

Thompson says offsite records management is becoming increasingly popular due to the space constraints of storing records onsite, as well as the costs involved in implementing the security measures, including fire detection and prevention, needed to store records securely.

 

“Offsite records management entails the storage of company data and information in purpose-built facilities incorporating data protection, which involves securing a backup data tape in an off-site vault. The storage facilities are specifically situated in low-risk areas where exposure to flooding, fires, earthquakes, flight paths or other hazards is least probable.”

 

Thompson says that a combination of physical and online data backup provides the most comprehensive backup storage system. “The online disaster recovery site continuously mirrors the information stored to the records management storage system, avoiding loss of data due to data corruption.

 

“Closure as a result of loss or damage to company data and information is becoming increasingly prevalent, yet this risk is so easily mitigated,” concludes Thompson.

Published in Storage & Data Centres
Friday, 12 October 2012 10:44

Zero outage computing in digital clouds


The cloud is everywhere. And it is the main topic of discussion at IT conferences and trade shows. Nevertheless, a number of business enterprises are still sceptical when it comes to security and availability requirements in cloud environments. Cloud providers are responding to these worries with the zero outage strategy.

 

The seriousness of the matter became evident during CeBIT in March 2012: Facebook suffered a major outage and was unavailable for hours. Millions of users worldwide could not access the social network due to technical problems. Today mobile applications for smartphones and tablets are also at risk.

 

Outages of this magnitude can be very costly. In 2010 the Aberdeen Group surveyed 125 enterprises worldwide and discovered that outages of just a few minutes per year can cost an average of USD 70,000. Surprisingly, only four percent of the businesses surveyed had guaranteed IT availability of 99.999 percent. This should be unsettling, especially since experts claim that one hour of downtime in production costs some USD 60,000, and for an online shop the figure is USD 100,000. Banks are at the top of the list. They can lose up to USD 2.5 million in one hour of downtime.

 

Zero outage is only possible in private clouds

To win the trust of cloud sceptics despite these kinds of worst-case scenarios, external data centre operators are striving to implement consistent management of their IT systems based on a zero outage principle. This includes high availability of services which, according to a definition by the Harvard Research Group, means that systems should be running at an availability level of 99.999 percent – that translates into a maximum of roughly five minutes of downtime per year. The only exceptions to the principle of "zero outage computing" are agreements made with customers that govern new releases, updates or migrations. But are such high levels of availability realistic, and if so, how can they be achieved and maintained?
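The availability figures above translate directly into yearly downtime budgets. The short calculation below (illustrative arithmetic only, not tied to any particular provider's SLA) shows how the five-minute figure falls out of the 99.999 percent target:

```python
# Downtime budget implied by an availability target (illustrative arithmetic only).
MINUTES_PER_YEAR = 365.25 * 24 * 60  # roughly 525,960 minutes

def downtime_budget_minutes(availability_percent: float) -> float:
    """Maximum minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for target in (99.9, 99.99, 99.999):
    print(f"{target}% availability -> {downtime_budget_minutes(target):.1f} minutes of downtime per year")

# 99.999% availability works out to about 5.3 minutes per year,
# i.e. the "five minutes" quoted above; 99.9% already allows almost nine hours.
```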

 

Those attempting to provide the perfect cloud must be able to discover errors or failures before they arise – and take every technical step possible to prevent them from occurring. What's more, the cause of every possible failure must also be carefully analysed. It should be noted that more outages result from software issues than from problems in the cloud architecture itself. And there are a number of inherent differences – for example, users should not expect zero outages in the public cloud, which by nature runs on the public Internet and is susceptible to downtime. The trade-off is the many services offered at no charge in the public cloud: almost limitless gigabytes of storage capacity at no cost, but without support services.

 

Multiple layers of security

But things are much different in the private cloud: using their own individually designed end-to-end network solutions, providers can guarantee high availability if their ICT architectures are built on fault resilience and transparency, with integrated failure-prevention functions and constant monitoring of operations and network events. What's more, intelligent, self-healing software is essential, enabling rapid automatic recovery in critical situations without any manual intervention, so that system users can continue working without noticing any kind of interruption.

 

One example of high fault resilience is a RAID (Redundant Array of Independent Disks) system. RAID automatically mirrors identical data in parallel on two or more separate storage media. If one system fails, this has no impact on the availability of the entire environment, because the mirrored systems continue running without interruption. The user is completely unaware of any issues. In addition, RAID configurations have early warning systems, and most of the incidents that occur are corrected automatically without the need for support from a service engineer.
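As a rough illustration of the mirroring principle (a toy model, not a real RAID driver; the class names are invented for the example), the sketch below writes every block to two disks and silently falls back to the surviving copy when one of them fails:

```python
# Toy model of RAID-1-style mirroring: every write goes to all disks,
# and reads succeed as long as at least one mirror survives.
class Disk:
    def __init__(self) -> None:
        self.blocks: dict[int, bytes] = {}
        self.failed = False

    def write(self, block_no: int, data: bytes) -> None:
        if not self.failed:
            self.blocks[block_no] = data

    def read(self, block_no: int) -> bytes | None:
        return None if self.failed else self.blocks.get(block_no)


class MirroredVolume:
    def __init__(self, disks: list[Disk]) -> None:
        self.disks = disks

    def write(self, block_no: int, data: bytes) -> None:
        for disk in self.disks:           # mirror the block to every disk
            disk.write(block_no, data)

    def read(self, block_no: int) -> bytes:
        for disk in self.disks:           # first healthy mirror answers
            data = disk.read(block_no)
            if data is not None:
                return data
        raise IOError("all mirrors have failed")


volume = MirroredVolume([Disk(), Disk()])
volume.write(0, b"customer data")
volume.disks[0].failed = True              # one drive fails...
assert volume.read(0) == b"customer data"  # ...and the user never notices
```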

 

However, the so-called SPoF (single point of failure) is especially critical for the overall IT environment. These SPoFs include individual storage, computing or network elements, installed only once in the system, that can completely shut down operations if they fail. Since mirroring these components is relatively expensive and complex, some IT providers do not install mirrored configurations – and that is extremely risky. But with zero outage this risk must also be eliminated. Zero outage also means safeguarding the data centre against a catastrophic failure through the use of a UPS (Uninterruptible Power Supply).

 

If an application fails, however, there will be a processing gap, for example in the form of lost transactions, no matter how quickly operations are shifted to an alternate system. The system must be able to fill this gap automatically by repeating, after the shift to the alternate system, all of the processing steps that were skipped.
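One common way to make that possible is to journal every processing step before it is executed, so that the alternate system can replay whatever the failed system never completed. The sketch below assumes such a journal-and-replay approach; the Journal class and sequence numbers are illustrative rather than part of any specific product:

```python
# Sketch of closing a processing gap after failover by replaying journaled steps.
from dataclasses import dataclass, field


@dataclass
class Journal:
    entries: list[tuple[int, str]] = field(default_factory=list)  # (sequence, payload)

    def append(self, seq: int, payload: str) -> None:
        self.entries.append((seq, payload))


def replay_gap(journal: Journal, last_applied_seq: int, apply) -> int:
    """Re-apply every journaled step the alternate system has not yet seen."""
    for seq, payload in journal.entries:
        if seq > last_applied_seq:
            apply(payload)
            last_applied_seq = seq
    return last_applied_seq


journal = Journal()
for seq, tx in enumerate(["debit account A", "credit account B", "debit account C"], start=1):
    journal.append(seq, tx)

# The alternate system had only seen transaction 1 when the primary failed.
applied = replay_gap(journal, last_applied_seq=1, apply=lambda tx: print("replaying:", tx))
assert applied == 3  # the gap (transactions 2 and 3) has been filled
```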

 

Data protection is just as important

The seriousness of the matter, and the urgency of mitigating the risk of data loss or leakage, is evident in the South African market from the requirement for full disaster recovery and fail-over capabilities in solutions. In many cases organisations look to cloud solution providers for an IT business continuity solution. The answer, however, lies not in using cloud services for disaster recovery but in sourcing cloud solutions that have disaster recovery capabilities engineered into them.

 

The same awareness of and requirement for data protection can be seen in the regulatory developments that apply to sourcing IT services, and cloud services in particular. The Protection of Personal Information Act (POPI) and King III, both relevant to the South African market, are becoming major considerations when sourcing IT services and looking for a provider that complies with the relevant acts or frameworks. In addition, existing certifications such as ISO 27001 and Sarbanes-Oxley (SoX)/Statement on Auditing Standards 70 (SAS 70) compliance should be mandatory when considering a cloud service provider that views data protection as a critical part of its solution.

 

Quality needs dedicated employees

Cloud providers must make sure that their employees adhere to the same standards and processes at all locations and even across multiple time zones. Studies indicate that more than 50 percent of all outages are the result of human error. That is why training focuses on quality management as a basic, integral element of company culture. This approach requires a central training plan, globally standardised manuals and comprehensive information provided by top management.

 

Every employee must do everything possible to prevent a potential failure or incident from even happening. And that also means having an understanding of what causes outages. They should act in accordance with the old saying "fire prevention is better than fire fighting." If the worst case should ever occur, employees must not be afraid to admit their mistakes, so that they can avoid making them again in the future. It is also vital to have a centrally organised specialist team that is ready to go into action, finding solutions to problems that arise unexpectedly and implementing these solutions throughout the enterprise. When faced with a serious outage, the shift manager can quickly call the team together to begin the recovery process. Employees working at the affected customer site can follow the action being taken via a communications system.

 

Quality management is an ongoing process ensuring that required knowledge is always systematically updated and expanded. It will never really be possible to guarantee zero outages in cloud processes – not even the best in class can do this – but delivering system availability that goes beyond 99.999 percent can be achieved. Businesses can be sure of this by concluding service level agreements with their service providers.

Published in Mobile
Two choices to leap the information-quality-in-the-cloud hurdle

The fact that more business information is moving into the cloud, and that the cloud is being used to store an increasing share of business data, is not news. Nor is it news that one of the biggest challenges is ensuring the quality of that information in line with enterprise norms.
 
One of the challenges that still remains, however, is ensuring good quality information when it resides in off-premise, cloud-based systems.
 
The hurdle has always been that organisations storing information in the cloud have been subject to service level agreements (SLAs) with their service providers that focus on access availability, speed of delivery, data recovery and security, but never on maintenance or watchful and responsible care in accordance with enterprise processes, procedures and practices. The result is that the information can never be fully trusted.
 
Integration and quality concerns, which underscore the difference between information that can be trusted and information that cannot, require that both cloud and on-premise information and content be subject to the same standards. They must endure the same rigours: the same exchange protocols, integration and quality processes, domain-specific business rules and so on, to ensure that all information is uniformly managed in a standardised manner.
 
That may be achieved by mapping data between on-premise systems and those in the cloud; if so, it must be done through a standard, common set of logic and rules implemented to govern the information and content. That architecture will involve a compromise in processing and storage performance, which is inevitable in any type of exchange, but this should not constrain the design to the extent that management of unstructured cloud information and content is largely ignored.
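As a loose illustration of that first option (the rule set and field names below are invented for the example, not drawn from any particular product), the same small set of governance rules is applied to records regardless of whether they live on-premise or in the cloud:

```python
# One shared rule set governs records from both on-premise and cloud sources.
RULES = [
    ("email", lambda value: isinstance(value, str) and "@" in value),
    ("country", lambda value: value in {"ZA", "UK", "US"}),
]

def conforms(record: dict) -> bool:
    """A record is trusted only if it passes the same rules wherever it is stored."""
    return all(check(record.get(field)) for field, check in RULES)

on_premise_record = {"email": "info@example.co.za", "country": "ZA"}
cloud_record      = {"email": "not-an-email", "country": "ZA"}

print(conforms(on_premise_record))  # True
print(conforms(cloud_record))       # False -> flag for remediation before it is used
```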
 
Another approach is to exchange only the information or content that applications and users require for queries or reports, which keeps network traffic and storage requirements to a minimum. Virtualisation and federation technologies can be exploited here because they do not physically move all the information or content from its place of origin; instead they reference it, and only the requisite bits are actually copied. That offers another enormous advantage: the information and content are left intact at source and managed under the local standards and security.
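A toy sketch of that federated style of access is shown below (the record layout and field names are invented for the example): only the fields a report actually requests are copied, while the full records stay at their source:

```python
# Toy federated query: copy only the requested fields, leave the full records at source.
cloud_source = [  # stands in for an off-premise content store
    {"id": 1, "customer": "Acme", "revenue": 120_000, "notes": "large unstructured blob"},
    {"id": 2, "customer": "Zenith", "revenue": 87_500, "notes": "another large blob"},
]

def federated_query(source, fields):
    """Return lightweight projections; the source records themselves never move."""
    return [{field: record[field] for field in fields} for record in source]

report_rows = federated_query(cloud_source, fields=("customer", "revenue"))
print(report_rows)  # only the requisite bits crossed the network
```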
 
Failure to resolve this issue once a cloud-based architecture is adopted will exacerbate storage and duplication issues, which will fuel a lack of trust in business systems. Experience shows that when that occurs, the speed, flexibility and accuracy of information supplied to business users break down, with the result that organisations become inflexible, lethargic in the face of rapid market shifts, and spiral into margin depreciation.
 
More companies face the dilemma of which solution they will turn to. In an October 2011 report, IDC VP for storage systems Richard Villars stated that companies worldwide spent $3,3 billion on public cloud-based storage in 2010 and projected a compound annual growth rate of 28,9%, which puts the global spend at $11,7 billion by 2015. By comparison, the total spend in 2010 on on-premise storage was around $30 billion, and IDC's forecast for on-premise storage by 2015 remains ahead of the cloud, at more than $37 billion. Interestingly, IDC's report projects that service providers will increase their spend from $3,8 billion in 2010 to $10,9 billion by 2015.
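A back-of-the-envelope check of the compound growth implied by those figures (this is simple arithmetic on the numbers quoted above, not IDC's own model):

```python
# Compounding $3.3bn (2010 public cloud storage spend) at a 28.9% CAGR.
base_2010 = 3.3   # $ billions
cagr = 0.289

for year in range(2011, 2016):
    projected = base_2010 * (1 + cagr) ** (year - 2010)
    print(year, round(projected, 1))

# 2015 comes out at roughly $11.7 billion, matching the figure quoted above.
```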
 
So, once the cloud is incorporated into enterprise information strategies, regardless of which option companies choose, many more are facing the challenge, as there will always exist a growing need to expand on existing on-premise information management processes and capacity to accommodate external cloud information and content.

Published in Analytics & BI
