iland cloud infrastructure blog
Insights, tips and strategies on virtualization and cloud computing
Site Recovery Manager: Five Steps to Replicate Virtual Machines using Replication Seeds or Physical Courier
Thursday, May 16, 2013
To ensure efficient replication of virtual machines (VMs), there are times when, due to limited bandwidth and large virtual disks, an initial online synchronization is simply infeasible. When that happens you can download the virtual disks that make up a VM to media such as a USB drive, and then have the data securely couriered to the target iland Recovery Site.
The process is simple:
- Copy your VM(s) to external media and transport to the iland Recovery Site
- As long as the directory and file naming is the same, Site Recovery Manager will detect that it is the same VM and use it to seed the replication
- Confirm the directory in the target datastore and click ‘Yes’ to complete the replication configuration wizard
The process begins by taking a maintenance window on the VM and powering it down. You can then use the vSphere datastore browser or a third-party tool such as Veeam to download the virtual disks to removable media. For the purposes of this blog post, Veeam will be used in the replication example set out below. Simply follow these steps:
Step 1. To begin, download and install Veeam Backup Free Edition. Then identify both the VM to be replicated and the host that holds the VM in the Protected Site. Next, identify the datastore location of the VM to be replicated, launch Veeam, and connect to the identified ESXi host.
In this example, the VM to be couriered is SRM-Demo, whose files are in datastore “500GBSATA-VOL2”. You will need to take a snapshot of the VM SRM-Demo before starting the copy process.
Step 2. Access the “500GBSATA-VOL2” datastore, right-click the SRM-Demo folder to be copied, and drag and drop it to the designated drive.
Step 3. Once the VM has completed downloading to the removable media, attach the media to a workstation in the Recovery site network and upload the files to the Recovery site hosts and datastore.
Step 4. You can then right click the VM to be replicated at the Protected site to launch the vSphere Replication wizard.
Step 5. Confirm the datastore locations of the VM to be replicated by vSphere Replication.
If a file with the same name exists, vSphere Replication prompts you with a warning and offers you the option to use the target disk as a seed for replication. If you click ‘Yes’, vSphere Replication compares the differences and replicates only the changed blocks after the VM replication is fully configured and enabled. If you do not accept the prompt, then you must change the target location for your replication.
Finally, complete the wizard and replication is enabled.
The benefit of replicating data in this way is that your bandwidth requirements from the protected site to the recovery site are significantly reduced, and therefore costs are lowered. Organizations typically use a physical courier or replication seeds when there is a concern about the bandwidth consumed by an initial copy of a virtual machine across the WAN. The replication seeding option in vSphere Replication allows iland customers to overcome this limitation by seeding a copy of the virtual machine's VMDK files in the remote datastore and then synchronizing only the changed blocks at the DR datastore.
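The "changed blocks only" idea behind seeding can be illustrated with a simplified sketch. vSphere Replication's actual comparison happens inside the appliance at its own granularity; the block size and hashing here are assumptions purely for illustration:

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size; VR uses its own internal granularity

def changed_blocks(source: bytes, seed: bytes) -> list:
    """Return indices of blocks that differ between the source disk and the seed."""
    changed = []
    n = max(len(source), len(seed))
    for i in range(0, n, BLOCK_SIZE):
        a = source[i:i + BLOCK_SIZE]
        b = seed[i:i + BLOCK_SIZE]
        if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
            changed.append(i // BLOCK_SIZE)
    return changed

# A disk image where only one block changed after the courier copy was taken:
seed = bytes(BLOCK_SIZE * 4)
source = bytearray(seed)
source[BLOCK_SIZE] = 0xFF  # one byte modified in block 1
print(changed_blocks(bytes(source), seed))  # [1] - only block 1 crosses the WAN
```

With a seed in place, only the blocks that changed between the courier copy and the live VM need to traverse the WAN, which is where the bandwidth savings come from.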
Wednesday, April 24, 2013
You're mulling over cloud providers and gathering information on the speed of their processors, the cost of their RAM, the scalability of their SAN storage, etc. But you may have overlooked an important component during your research: the carriers used by each cloud provider.
Bandwidth carriers fall into three tiers, classified by geographic coverage, traffic volume, and number of routes. A Tier 3 carrier is typically regional in both size and routes. A Tier 2 carrier peers with some networks, but also purchases IP transit or pays settlements to reach at least some portion of the Internet. A Tier 1 carrier has a national or international presence and carries a substantial volume of traffic across the Internet, with full routes available through its peering relationships.
(Image credit: http://www.cedmagazine.com/articles/2011/02/crashing-the-tier-1-party)
The following Tier 1 characteristics are drawn from an IDC study on ISP classifications. The key attributes of Tier 1 ISPs are as follows:
- They have access to the entire Internet routing table through their peering relationships.
- They have one or two AS numbers per continent or, ideally, one AS worldwide.
- They own or lease international fiber-optic transport.
- They deliver packets to and from customers and to and from peers around the world.
Global Tier 1 ISPs have two additional characteristics:
- They peer on more than one continent.
- They own or lease transoceanic fiber-optic transport to facilitate the best possible customer access experience in diverse markets on more than one continent.
What does this mean for the regular customer?
Tier 1 network providers offer reduced latency and improved connectivity because there are fewer hops for your traffic to traverse. The result is that end users and customers benefit from stable and predictable connectivity when accessing from remote or office locations.
Traffic on Tier 1 providers is routed with greater intelligence, which reduces the likelihood of routing errors causing service disruptions. End users and customers benefit from high levels of uptime and access to resources, ensuring business activities proceed unhindered.
Tier 1 providers offer national and international connectivity capabilities. End users and customers benefit from seamless peering nationally and internationally with more packet routing and shaping intelligence features.
Server uptime and resource capacity are incredibly important features for a cloud provider to offer, but are an incomplete solution on their own. iland uses multiple Tier 1 providers at all our global facilities. So as you work your way through your due diligence for a cloud provider, make sure that the provider you choose uses network carriers that ensure the highest quality of service, allowing your applications and services to be delivered as efficiently and reliably as possible.
Thursday, April 11, 2013
In this fourth part of our series of posts on Site Recovery Manager (SRM) I identify the steps required to enable and monitor vSphere replication (VR). Historically, replicating data from one site to another was typically a costly endeavor. For decades, array-based replication technology was the de facto replication platform but, since the introduction of VR, that is no longer the case.
VR allows you to replicate virtual machines from your primary site to your secondary site in the iland Cloud. It is asynchronous and allows you to set your recovery point objective (RPO) as low as 15 minutes – a very short time between replications. However, although a shorter RPO means less data is lost during a recovery, more network bandwidth is consumed in keeping the replica up to date. In addition, the primary and secondary sites must be connected for you to be able to configure and enable replication.
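The RPO/bandwidth tradeoff above amounts to a simple feasibility check: the changed data accumulated in one interval must finish transferring before the interval ends. This sketch uses illustrative numbers, not figures from any particular environment:

```python
def transfer_seconds(delta_gb: float, link_mbps: float) -> float:
    """Time to push a replication delta across the WAN (GB converted to megabits)."""
    return delta_gb * 8 * 1024 / link_mbps

def rpo_feasible(delta_gb: float, link_mbps: float, rpo_minutes: float) -> bool:
    """The delta accumulated in one interval must transfer within that interval."""
    return transfer_seconds(delta_gb, link_mbps) <= rpo_minutes * 60

# 1 GB of changed blocks per interval over a 10 Mbps link takes ~13.7 minutes:
print(rpo_feasible(1.0, 10, 15))  # True - fits inside a 15-minute RPO
print(rpo_feasible(1.0, 10, 10))  # False - this link cannot sustain a 10-minute RPO
```

This is why dialing the RPO down toward the 15-minute minimum only works if the WAN link can absorb each interval's delta in time.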
VR is built for virtual machines only and replicates on a per virtual machine basis. It cannot replicate templates, physical RDMs or ISO files and it cannot replicate powered-off virtual machines. Replication begins once the virtual machine is powered on. Let’s examine how to replicate a virtual machine with VR:
To enable VR on a virtual machine follow these steps:
1. Ensure that the selected virtual machine is powered on; then right-click it and select 'vSphere Replication'
If you want to protect multiple virtual machines, switch to the VMs and Templates view, select a folder and then select the Virtual Machine tab. Select multiple VMs using either Ctrl-Click or Shift-Click; once your selection is complete, right-click and choose 'vSphere Replication'
2. You can set an RPO to determine the period of time between replications. For example, an RPO of 1 hour seeks to ensure that a virtual machine loses no more than 1 hour of data during the recovery.
Guest OS Quiescing types are determined by the virtual machine’s operating system. Microsoft VSS quiescing is supported for Windows virtual machines running Server 2003, XP or newer. VR does not support quiescing for Linux and older versions of Windows such as 2000. VR supports file system level quiescing for Windows 7 and Windows Server 2008 and application-level quiescing for Windows Server 2008 and Windows Server 8 operating systems.
Target File Location - you specify the target location for your virtual machine. You can either replicate the whole virtual disk initially or use the existing disk as a replication seed.
3. Leave the defaults and click Next.
4. Leave the default for the VR Appliance and click Next.
5. Click Finish to start the Replication.
6. Go back to Home, select SRM and click vSphere Replication. Select the VR appliance on the Target and confirm the replication status of the VM being replicated.
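Once replication is enabled, monitoring comes down to checking that each VM's last completed sync is within its RPO. The UI does this for you; a hypothetical sketch of the same check (VM names and timestamps are invented for illustration):

```python
from datetime import datetime, timedelta

def rpo_violations(last_sync: dict, rpo: timedelta, now: datetime) -> list:
    """Names of VMs whose most recent completed sync is older than the RPO."""
    return sorted(vm for vm, ts in last_sync.items() if now - ts > rpo)

now = datetime(2013, 4, 11, 12, 0)
status = {
    "srm-demo": datetime(2013, 4, 11, 11, 50),  # synced 10 minutes ago - OK
    "db-01":    datetime(2013, 4, 11, 10, 30),  # synced 90 minutes ago - violating
}
print(rpo_violations(status, timedelta(hours=1), now))  # ['db-01']
```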
Site Recovery Manager is enabling increasing numbers of organizations to integrate disaster recovery in the cloud as part of their business strategy. Part 5 in my SRM blog series will examine the challenges of migrating large amounts of data to your secondary site in the iland cloud.
Tuesday, April 2, 2013
vSphere replication enables customers to replicate virtual machines from one location to another using VMware as the primary engine, without the need for third-party, storage-array-based replication. iland leverages vSphere replication to provide a replication process that is cost-effective and readily available to companies in the SMB sector, where expensive storage arrays and even more expensive array-based replication are typically beyond their budgets.
This replication alternative is also heavily employed by enterprise customers because it is unlikely that all the sites for an enterprise use exactly the same array vendor in both the Protected and Recovery sites. Consequently, enterprise customers are attracted to vSphere replication because of its ability to enable protection to take place between dissimilar arrays.
Additionally, storage teams in large environments often find it takes more time than desired to enable replication on the right volumes/LUNs. vSphere replication, however, shortens replication setup times considerably, empowering VMware administrators to protect their virtual machines as and when they see fit.
Because vSphere replication is protocol-neutral, iland customers can easily migrate from one storage protocol to another. For example, vSphere replication allows replication from Fibre Channel-based storage to NFS-based storage.
How does vSphere replication make migration between two different storage protocols possible?
vSphere replication is an extension of vCenter that provides hypervisor-based virtual machine replication and recovery. It deploys as a 64-bit virtual appliance packaged in .ova format, with a dual-core CPU and two virtual disks (2GB and 10GB). The virtual appliance must be deployed in a vCenter Server environment using the OVF deployment wizard.
For the replication process to occur, vSphere replication sees only the datastores and does not interface directly with the storage protocols that the ESXi host uses. Instead, the vSphere replication appliance communicates with an agent on the ESXi host, which transfers the data to the appliance. This allows for the protection of virtual machines even when local storage is used – an attractive proposition for clients where direct-attached storage is more prevalent.
SRM Replication in the iland Cloud
For successful replication to the iland Cloud, the vSphere replication appliance must be deployed on the primary and secondary sites in the virtual environment. After the vSphere replication appliance is deployed it is integrated with the vSphere infrastructure, enabling virtual machines to be replicated from one site to another.
vSphere replication allows the replication of individual virtual machines on any datastore (local, FC, iSCSI, or NFS) to any other datastore in the iland Cloud. This feature is beneficial to iland customers because it removes the requirement to match storage arrays at both sites.
vSphere replication is storage protocol neutral and storage vendor neutral, so as long as the storage being used is supported by VMware’s general HCL there are no issues replicating to the iland Cloud.
vSphere replication provides several benefits:
- Data protection at a lower cost per virtual machine
- A replication solution that provides flexibility with storage
- Lower overall cost per replication
- The ability to set a different recovery point objective for each replicated virtual machine
Friday, March 1, 2013
“From a development perspective, if someone comes to me to request hardware my first question to them is ‘Why would you need to buy hardware rather than use the iland cloud?’” These days that’s what Abdul Hummaida, Development Project Lead at AppSense asks of developers every time they request traditional IT resources at UK-based AppSense. A simple question but one that reflects the significant transition AppSense has made since moving its development and some of the company’s production systems to VMware provider, iland Cloud Infrastructure.
AppSense focuses on the creation of complete user virtualization that enables users to seamlessly transition between electronic devices giving them the flexibility to work in any way they want. The company is growing rapidly but sustaining a 30% year-on-year growth rate has its challenges – one of them being the huge demand for the company’s software that was driving the requirement for faster development and testing at significantly accelerated rates. As a result, the company needed a way to execute massive and aggressive scalability testing coupled with high performance and availability.
To meet these challenges AppSense considered building its own datacenter but, as Abdul says, “The problem was that the company didn’t need all of that hardware all of the time. Maybe once or twice a month, but the rest of the time it would be standing idle – something that was problematic from a CAPEX perspective. We needed the infrastructure in bursts when testing demanded it, which meant that for the rest of the time the hardware would be unused.”
Cost savings, scalability, performance, speed to market and collaboration. Just some of the benefits AppSense is reaping from its transition to iland’s cloud – known internally within AppSense as Project Silver Lining.
Fig. 1: Project Silver Lining
But it doesn’t stop with the company’s Research and Development teams. As other departments hear about the flexibility and ease with which they can use the cloud for their workloads, more internal requests are made to Abdul to allow them to take advantage of iland’s cloud too. Download the full AppSense use case to find out more about the company’s decision to move to iland.
Thursday, February 28, 2013
When it comes to moving data to the cloud, at iland we’ve seen a host of different data permutations and combinations our customers have requested. It has always been a tremendous challenge to correctly predict the server workloads and capacity requirements a customer will need once they’ve moved to a cloud environment, and the type of IT configuration required to support them. And admittedly, we’ve had some instances where we’ve raised our eyebrows at what a customer wants to move to a cloud environment. However, all that has changed with the availability of the Dell® Performance Analysis Collection Kit, or DPACK.
DPACK is a complimentary tool offered by iland that gives you a true sense of your current IT environment and allows you to identify areas that you can further optimize. It’s available to organizations that are already virtualized or are looking at becoming virtualized. At iland for example, it can be used to determine the size of your disaster recovery environment or your production environment.
DPACK can run in memory on any server in your environment and requires network connectivity to only those servers you want to collect data from. The data it gathers includes core information requirements such as disk IO, throughput, capacity, and memory utilization. The result is an in-depth view of server workload and capacity requirements that are used to make recommendations on what type of IT configuration and sizing will be required to support the workload and capacity of the servers being considered for your cloud-based environment whether for disaster recovery or production.
DPACK generates two kinds of reports:
- An aggregation of resource needs across disparate servers, with a simulation of those workloads if consolidated onto shared resources.
- An in-depth individual server report that IT administrators use to search for potential bottlenecks or hotspots that need to be removed from a new design configuration.
Using the data collected, an iland Solutions Architect can help you look for ways to optimize your datacenter and plan for upcoming critical projects. DPACK is available as either Windows or Linux options and can report on up to 256 Disks/Servers in your environment at once.
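The aggregation report described above essentially rolls per-server metrics up into a single shared-resource estimate. The metric names and fleet data below are invented for illustration and are not DPACK's actual output format:

```python
def aggregate(servers: list) -> dict:
    """Consolidate per-server figures into a single shared-resource estimate.

    Summing per-server peaks is a conservative upper bound: peaks rarely
    coincide, which is exactly the consolidation headroom a collector
    like DPACK helps reveal.
    """
    return {
        "peak_iops": sum(s["peak_iops"] for s in servers),
        "throughput_mbps": sum(s["throughput_mbps"] for s in servers),
        "capacity_gb": sum(s["capacity_gb"] for s in servers),
        "memory_gb": sum(s["memory_gb"] for s in servers),
    }

fleet = [
    {"peak_iops": 800, "throughput_mbps": 40, "capacity_gb": 500, "memory_gb": 16},
    {"peak_iops": 300, "throughput_mbps": 10, "capacity_gb": 250, "memory_gb": 8},
]
print(aggregate(fleet))  # {'peak_iops': 1100, 'throughput_mbps': 50, ...}
```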
With the use of DPACK and the support of the iland team, you can take the guesswork out of understanding system performance and gain quick, knowledgeable insight into your IT environment that will help you control spending while making the right decisions for your business. So the question “You want to move WHAT to the cloud?” simply becomes obsolete!
Thursday, February 21, 2013
There is a perennial dilemma playing out at countless datacenters around the world: with companies growing, business demands changing, and product development increasing, companies are asking themselves – should we invest in additional resource capacity or consider cloud usage for the additional demand placed on our systems?
The question is often not as straightforward as it may seem, due to the many complexities attached to both options. Adding resource capacity within an existing datacenter footprint usually entails capital expenditure on physical components such as servers, networking equipment, and storage, as well as the in-house or consulting expertise necessary to incorporate and manage the additional components. This assumes, of course, that you have available cabinet and power capacity for the new gear in the first place. The end result is additional resources that will be utilized by your company in a similar manner to your original deployment: secure, low latency, private to your company. Of course, the flip side is that you now have additional hardware components to manage, maintain, and depreciate as they eventually reach end of life.
Is a traditional public cloud any better?
Public clouds are known for “elasticity” and consumption-based resources. And many public cloud vendors are known for proprietary front-ends or custom hypervisor layers that make migration to and from their infrastructure a complex process requiring multiple steps or rebuilding. Often a company’s IT team must re-learn the offsite hypervisor management process and treat the cloud resources as a separate entity. The end result is additional resource capacity with little to no CAPEX, but at the cost of new management tools, the complexity of remote access, and “disjointed” usage of the combined infrastructures.
iland has been providing cloud infrastructure for over 5 years from 7 global datacenters. During that time we’ve worked with a wide variety of customers, ranging from the Fortune 100 to SMBs, and have found that the vast majority of customers are looking for the same thing from cloud resources: integration. Companies looking at cloud resources are not looking to re-learn technology, they’re not looking to stitch together networks and re-train management and end users on how to utilize resources, and they’re certainly not looking to throw away their investment in and knowledge of local VMware configuration to work within an incompatible environment. Our findings have led us to embrace the concept of “Hybrid Clouds”, in which a customer can expand their local VMware infrastructure to a remote iland Cloud Services facility and access and manage those resources with the tools and methods they were already using in-house (this can include their existing vCenter Server with vSphere Client plug-ins).
iland recently launched iland CloudConnect, another step towards our goal of unified integration. iland CloudConnect allows companies using datacenter space within top-tier providers such as TelecityGroup to extend layer 2 connectivity from their local switching infrastructure to iland Cloud Services. It is an incredible tool for providing very low latency access to cloud resources, with the security and convenience of maintaining the same internal IP subnet across both sites. And with a choice of commit speeds, you can select the bandwidth you need rather than being forced into a minimum speed that is often much more than required. VMware-based virtual machines can be copied and managed in iland Cloud Services using the customer’s existing vSphere Client (via vCloud Connector), and machines have full layer 2 visibility between both sites for normal usage. In other words, you can extend the investment you’ve already made in your VMware environment to access a secure enterprise cloud rather than replacing it with a completely new system.
You have many decisions to make when considering internal expansion and cloud consumption. iland Cloud Infrastructure with iland CloudConnect provides a high level of integration between your local VMware resources and our enterprise infrastructure. By allowing native VMware management and access across existing network segments, your team can use cloud resources as they are meant to be used: additional functionality and capacity without the pains of re-learning hypervisor technology or incurring CAPEX.
Learn more about iland CloudConnect in TelecityGroup datacenters.
Wednesday, February 20, 2013
iland will be exhibiting and speaking at VMware Partner Exchange 2013 in Las Vegas next week. If you’re attending the event, stop by and see us in booth #V11 where we’ll be demonstrating iland Cloud Services™.
If you’re attending the show and you’d like to find out more about us, feel free to visit our booth so you can see a demo and talk to one of our sales engineers. Alternatively, if you’d like to arrange a one-on-one meeting with an iland staff member, please contact Kim Howard at firstname.lastname@example.org to set up a mutually convenient date and time.
Watch Dante Orsini, SVP Business Development at iland who will be making his inaugural appearance during the Opening Ceremony that kicks off Partner Exchange on Tuesday, February 26.
You may also want to attend the session on Cloud Capitalization Strategies to Quickly Grow Your Existing Practice, presented by Dante later that same day. The session will be held in the Sponsors’ Theater in the Exhibit Hall at 5:30pm and will cover iland’s Partner Program and the benefits it offers, in addition to reviewing several use cases implemented by iland and some of our partners.
Look forward to seeing you there!
Monday, February 11, 2013
These days most organizations with any form of IT investment are aware of or already using cloud resources. But for companies with a significant existing IT infrastructure, the notion of dropping their current investment and moving everything to the cloud is usually not realistic. Fortunately hybrid cloud options exist to fill in the gap between purely internal private clouds and outsourcing all IT to a public cloud. In this post we define public, private, and hybrid models and the potential benefits of each to your organization.
The concept of a “Public Cloud” usually involves some form of elastic, subscription-based resource pools in a hosting provider datacenter that utilizes multi-tenancy. Resources include CPU, RAM, storage, and bandwidth in a pool allocated for customer use. The term public cloud doesn’t mean less security (iland public cloud offerings are within SSAE16/ISO27001 facilities and deployed with internal security guidelines that meet or exceed most customers’ internal security requirements), but instead refers to multi-tenancy. This means that customers can benefit cost-wise from the economies of scale of a larger infrastructure and enjoy the scalability of on-demand expansion or resized resource pools without having to order additional physical resources. The trade-off? Public clouds often have reduced permissions for customers. This usually means not having direct ESXi/vCenter access, or direct access to the SAN manager. Fortunately, technologies like VMware vCloud Director have abstracted many of these permissions and brought public cloud offerings even closer to private cloud permissions. The bottom line: public clouds are ideal for scalable and dynamic environments and have many if not all of the security features of a private model. The only drawback may be some reduced permissions or visibility due to their multi-tenant nature (although many advances in VMware products have improved this).
“Private Cloud” is normally used to describe a VMware deployment in which the hardware and software of the environment is used and managed by a single entity. Private cloud deployments allow for an organization to have full visibility to vCenter and ESXi hosts for greater control and utilization of technologies that require a high level of permissions. With these permissions and isolation typically come added costs. Users no longer benefit from native High Availability clustering or on-demand scalability without adding additional hardware. This can lead to higher overall costs and add complexity to rapid expansion.
So what’s a company to do? The concept of a “hybrid cloud” is meant to bridge the gap between high control, high cost “private cloud” and highly scalable, flexible, low cost “public cloud”. The concept revolves heavily around connectivity and data portability. Simply put, a “hybrid cloud” is the simultaneous usage of public and private cloud models to accomplish your organization’s goals. The use cases are numerous: resource burst-ability for seasonal demand, development and testing on a uniform platform without consuming local resources, disaster recovery, and of course excess capacity to make better use of or free up local consumption.
VMware has a key tool for “hybrid cloud” use called “vCloud Connector”. vCloud Connector is a free plugin that allows the management of public and private clouds within the vSphere Client. The tool offers users the ability to manage the console view, power status, and more from a “workloads” tab, and offers the ability to copy virtual machine templates to and from a remote public cloud offering. This level of control and flexibility makes the utilization of remote resources an intuitive experience.
For the majority of organizations today, the optimum solution for their many and varied use cases is hybrid cloud. However, because no two organizations are alike the extent of their use of public, private and hybrid clouds can differ substantially and it is up to each organization to determine the cloud model that is most appropriate for their business objectives and strategies.
Friday, February 1, 2013
When it comes to IT deployments, effective planning is an essential part of the process and should not be underestimated. Planning is where discoveries are made and changes implemented as a result. A VMware Site Recovery Manager (SRM) deployment is no exception. Successfully deploying SRM to the iland cloud requires proper planning and preparation, and there are a number of steps to follow:
Step 1: Understand the Requirements for SRM
SRM requires that VMware vSphere be configured at each of the protected and recovery sites:
- Each site must have at least one datacenter
- vSphere replication will be the mechanism used to replicate VMs from the customer’s site (protected site) to iland’s cloud (recovery site)
- The iland recovery site will have the hardware, network, and storage resources to support the same virtual machines and workloads as the customer’s protected site
- Validate that DNS lookups are working for the servers at both the protected and recovery sites
- The sites will be connected by a reliable IP network
- The protected and recovery sites must be paired before you can use SRM
The SRM server operates as an extension to the vCenter server. Because the SRM server depends on vCenter for some services, you must install and configure vCenter at both the customer site (protected site) and the iland cloud site (recovery site) before you install SRM.
Step 2: Hardware Requirements
Whether the SRM server is physical or virtual, it requires these minimums as a starting point:
- Processor: 2.0GHz or higher Intel or AMD x86 processor
- Memory: 2GB
- Disk storage: 2GB
- Networking: Gigabit recommended
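A pre-deployment checklist like the one above is easy to encode as a quick validation pass. The candidate-server fields below are hypothetical names chosen to match the list, not output from any VMware tool:

```python
# Minimums from the SRM hardware requirements list above:
MINIMUMS = {"cpu_ghz": 2.0, "memory_gb": 2, "disk_gb": 2}

def missing_minimums(server: dict) -> list:
    """Return the names of any requirements the candidate SRM server misses."""
    return [k for k, v in MINIMUMS.items() if server.get(k, 0) < v]

candidate = {"cpu_ghz": 2.4, "memory_gb": 4, "disk_gb": 1}
print(missing_minimums(candidate))  # ['disk_gb'] - needs more disk before install
```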
Step 3: Required vSphere Environment
- 2 vCenter Servers – one in the protected site and one in the recovery site
- 1-2 ESXi 5 servers at the recovery site
- SQL Server 2005 or 2008 – dedicated SQL server
Step 4: Licensing SRM
SRM is licensed by entering a valid license string. SRM has two different licensing models because it can be configured in two separate ways: unidirectional (active/standby) and bidirectional (active/active).
With unidirectional configuration, you will need an SRM license for the virtual machines protected by the SRM server at the protected site only. To enable bidirectional operation including re-protection, license keys must be installed at both the protected and recovery sites. SRM 5.0 is licensed on a per-VM basis.
SRM 5.0 is tested up to a maximum of 1,000 protected virtual machines per site. You can create a maximum of 500 Protection Groups and you can run up to ten Recovery Plans concurrently per site.
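When sizing a deployment, it is worth checking a planned design against these per-site maximums up front. A minimal sketch (the dictionary keys are invented labels for the limits quoted above):

```python
# SRM 5.0 per-site maximums quoted above:
LIMITS = {"protected_vms": 1000, "protection_groups": 500, "concurrent_recovery_plans": 10}

def over_limits(planned: dict) -> dict:
    """Which per-site maximums a planned design exceeds, and by how much."""
    return {k: planned[k] - LIMITS[k] for k in LIMITS if planned.get(k, 0) > LIMITS[k]}

design = {"protected_vms": 1200, "protection_groups": 400, "concurrent_recovery_plans": 10}
print(over_limits(design))  # {'protected_vms': 200} - split across sites or trim scope
```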
ESXi 5 host licensing is metered in the iland environment and is calculated from the average RAM used on each host.
Step 5: Determining VMs to Be Protected
You’ll need to determine which VMs to protect. For example VMs with application-type services that need to be available to an organization at the time of a test or a disaster would need to be protected. Once you have identified which VMs to protect, you need to ensure that vSphere replication to the iland cloud is correctly configured for those protected virtual machines.
Step 6: Mapping Inventory Objects
SRM provides a simple way to map inventory objects from the resources at the primary site to the resources available at the recovery site in the iland cloud. Inventory objects include VM folders, network connections, and compute resources. SRM automates the process of mapping these inventory objects from the protected site to the recovery site.
Finally, I recommend validating that DNS lookups at the protected and recovery sites return the correct results. To validate the lookups you’ll need to confirm DNS on both vCenter servers, both SRM servers, and all ESXi systems hosting protected VMs.
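The DNS validation step above can be scripted with nothing but the Python standard library. The hostnames in the commented example are placeholders; substitute your own vCenter, SRM, and ESXi names:

```python
import socket

def dns_ok(hostnames: list) -> dict:
    """Map each hostname to whether a forward DNS lookup succeeds."""
    results = {}
    for name in hostnames:
        try:
            socket.gethostbyname(name)
            results[name] = True
        except socket.gaierror:
            results[name] = False
    return results

# Hypothetical inventory - run this from both sites and compare the results:
inventory = ["localhost"]  # e.g. ["vcenter-prod", "srm-prod", "esxi-01", "esxi-02"]
print(dns_ok(inventory))
```

Running the same check from a machine at each site catches the common failure mode where names resolve correctly at the protected site but not at the recovery site.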
Click here to see a video on configuring your SRM environment