
White Paper: Ensuring the success of data center consolidation

The business case for data center consolidation is compelling, and not just for large, multinational corporations. Small and mid-sized companies have become excellent candidates for data center consolidation as well. Several factors are responsible for the surging interest in consolidation. Network infrastructures, applications, and services continue to rise in complexity. Users are becoming increasingly mobile and demanding; they expect a level of performance that will enhance their productivity, regardless of the task at hand and where they are attempting to accomplish it. And the global economic climate is mandating that firms identify ways to reduce their capital and operating expenditures without sacrificing quality.

    TABLE OF CONTENTS
  • Executive summary
  • Why embark on a data center consolidation project?
  • Hurdles to overcome
  • Performance management tool essentials
  • TruView™ Appliance, a true unified platform
  • Infrastructure data monitoring
  • Achieving and measuring data center consolidation success

Executive summary

While the prospects and benefits of data center consolidation may be appealing, businesses cannot afford to simply dive into the effort headfirst. Numerous challenges lie in wait that can derail the consolidation project before it arrives at its desired destination. Furthermore, companies must approach consolidation with a strategic, long-term perspective. Otherwise, the short-term gains will turn out to be little more than fool's gold. Taking a long-range viewpoint requires obtaining an in-depth understanding of current network, application and service performance; planning for and executing the transition to a new operating environment; and aggressively monitoring and managing the updated architecture to ensure it is continually achieving the benchmarks and metrics necessary for success now and in the future.

Effective, persistent management of the consolidated data center is the key to unlocking and optimizing the return on investment for the effort. Unfortunately, it is also the most difficult step in the process, because legacy performance management capabilities such as application flow monitoring, transactional views, packet analysis, SNMP polling and stream-to-disk (S2D) archiving require multiple platforms and erode the advantages available as a result of consolidation. What businesses need is a solution with the scalability, breadth and depth to acquire, integrate, retain, and present information that truly reflects the performance – from the operations, IT, and end-user perspectives – of the networks, applications, and services executing within the consolidated data center.

Why embark on a data center consolidation project?

Cost

The shrunken footprint that accompanies data center consolidation positions firms to reduce their costs, with respect to both capital expenditures and operating expenditures. On the capital side of the equation, a consolidated environment means a smaller network and applications infrastructure. As a result, businesses need less hardware, including fewer servers, switches, routers, and other equipment. A reduction in servers likely translates to fewer required instances of software applications, allowing companies to further cut their capital budgets.

With the proliferation of cloud-based SaaS options, data center consolidation presents a host of opportunities to slash operating expenditures. Remote facilities either can be eliminated or pared to a fraction of their original size, generating savings in leasing costs. Connectivity to these facilities can also be scaled back, minimizing transport costs while also setting the stage for improved oversight of service provider relationships and performance. The consolidated infrastructure consumes less power and is easier to cool, reducing utility bills and paving the way for "green" initiatives that are rapidly gaining traction. Perhaps most significantly, consolidation eases the burden on IT and operations personnel. With less activity at remote locations, the management and communications requirements for those sites drop dramatically. Consequently, IT and operations staff have the potential to isolate and resolve problems faster and at less expense, freeing these resources to address higher-priority, business-critical tasks.

Optimization

As networks, applications, and services grow more complex and users expect to conduct unified communications without a compromise in functionality or performance, a company's distributed legacy infrastructure is hard-pressed to withstand the strain. Toss in the occasional corporate merger or acquisition that expands the enterprise and ratchets up network and application disparity, and the situation borders on untenable.


Figure 1: Data center consolidation is necessary not only to simplify the infrastructure, but to optimize it so quality of service can be maintained and ultimately improved

Consolidation promotes several avenues to optimization. One is the aforementioned transport. With a more centralized approach, there are fewer pipes to monitor, the architecture is more straightforward and easier to control, and traffic patterns and volumes are more visible and clearly defined. This environment offers the option to implement more advanced protocols and management strategies that maximize bandwidth utilization and performance of the overarching network and its applications.

Data center consolidation also goes hand-in-hand with application virtualization. The objective of application virtualization is to segregate applications from servers. Instead of running on a physical server with which it is colocated, an application executes on a virtual server that can reside anywhere in the enterprise, such as in the consolidated data center. As a result, fewer physical servers are needed, because each handles many applications, each of which performs as if the server were dedicated to it. When properly planned and maintained, the adoption of shared services is transparent to the end users of the applications, yet delivers a more manageable quality of service. The benefits of application virtualization are so compelling that a survey by consulting firm EMA estimates nearly three-quarters of all enterprises employ virtualization for at least some of their production applications.

Thanks to data center consolidation, automation of business-critical processes and systems is a realistic option. Automation solutions in the data center, for example, can restart failed applications, dynamically allocate new servers, conduct scheduled backups and perform configuration management of the operating environment. Automation brings a number of advantages, including process consistency and enforcement of corporate rules and regulations, accelerated process execution, and minimization of human error. It also allows for more efficient adaptation to changing conditions, and it increases the productivity of the IT and operations teams whose manual input and support for automated processes and systems are no longer required.
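
To make the idea concrete, the following is a minimal Python sketch of one such automation task, restarting failed applications. The service names and the systemd-based health check are illustrative assumptions, not a prescription for any particular environment.

```python
import subprocess
import time

# Hypothetical list of business-critical services to supervise.
WATCHED_SERVICES = ["order-entry", "billing", "inventory"]

def is_running(service: str) -> bool:
    """Return True if the service reports an active state via systemctl."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0

def restart(service: str) -> None:
    """Restart a failed service and log the action for the audit trail."""
    print(f"restarting {service}")
    subprocess.run(["systemctl", "restart", service], check=True)

if __name__ == "__main__":
    while True:
        for service in WATCHED_SERVICES:
            if not is_running(service):
                restart(service)
        time.sleep(30)  # poll every 30 seconds
```

In practice this logic usually lives inside an orchestration or data center automation platform rather than a standalone script, but the consistency benefit is the same: the restart procedure executes identically every time, with no human in the loop.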


Security

The more widely distributed the footprint of networks, applications, and services, the more susceptible and vulnerable it is to security breaches. Data center consolidation provides immediate fortification while laying the foundation for implementation of more sophisticated ongoing risk mitigation strategies.

Consolidation means some locations will be eliminated altogether, and others will see reductions in size and scope. With fewer sites and assets to manage, the task of physically securing the enterprise becomes far easier, less costly, and requires fewer resources. Similarly, a more compact enterprise architecture relies on fewer connections between facilities. Simplifying transport layer connectivity sharpens the focus on, and effectiveness of, information security. Improving electronic security for transport is vital, because the proliferation of technologies such as multiprotocol label switching and the convergence of voice, data and video are transforming the role of transport. It is no longer merely a connection between a pair of points to transfer information; it has become an integrated piece of an application-aware infrastructure that is crucial to fulfilling the promise of improving service levels without sacrificing security.

In today's highly competitive global economy, corporate success is directly correlated to the availability and response time of networks, applications and services. When any of these are compromised, whether due to a security breach or other event, it is essential to restore performance as rapidly as possible, regardless of the severity and impact of the problem. Therefore, disaster recovery becomes a top priority for businesses. With data center consolidation, the planning, implementation, and execution of disaster-recovery solutions are less daunting tasks because all of the vital components are colocated, easing replication and failover initiation.

Compliance

Regardless of the industry within which a company operates, compliance is growing in importance. Business units and employees need to demonstrate that they are adhering to corporate policies and procedures. Firms must prove they are in alignment with government regulations such as Sarbanes-Oxley; they must also comply with private sector regulations, such as the Payment Card Industry Data Security Standard (PCI DSS), to win new contracts, be eligible to utilize services or keep their doors open. IT personnel have to show that hardware and software components purchased or developed in-house meet industry standards like ITIL. And IT and operations personnel must be able to track compliance with service-level agreements, both internally with business unit stakeholders and externally with partners and customers.

Data center consolidation facilitates compliance on at least two fronts. First, it promotes process and system automation, which takes the human out of the loop and encapsulates the procedures and functions that must be executed to remain in lockstep with relevant policies, regulations, standards and quality of service metrics. Second, it encourages the implementation of a comprehensive auditing capability that allows for the conclusive demonstration of operational compliance at a snapshot in time or over a longer window of time.

| Compliance category | Definition | Example | Consolidation impact/benefit |
| --- | --- | --- | --- |
| Government | Federal, state and local regulations companies must follow to acquire and retain business | Sarbanes-Oxley; representations and certifications | Reduced hardware/software footprint eases comprehensive auditing and inventory management |
| Industry | Regulations issued within and across vertical market sectors that affect access to and delivery of services | PCI DSS | Reduced hardware/software footprint provides greater physical and information security, as well as improved audit and performance tracking |
| Corporate | Policies and procedures issued by firms to promote efficiency and business ethics | Website access; e-mail usage | Application virtualization paves the way for monitoring all transactions across the enterprise |
| Technology | Hardware and software implementation, deployment and performance best practices and standards | ITIL | Centralized hardware/software infrastructure and operations empowers IT to adopt and adhere to standards |
| Service-level agreements | Performance and quality-of-service guarantees issued by third-party partners and providers | Bandwidth availability; VoIP mean opinion scores | Smaller transport footprint facilitates performance tracking in real time and over time |
| Service-level metrics | Internally facing performance and quality-of-service guarantees issued to business units | Application availability; application response times | Application virtualization promotes gathering transaction forensics for performance monitoring and usage assessment |

Table 1 highlights key compliance categories and the impacts of data center consolidation.


Hurdles to overcome

The advantages of data center consolidation are clear and compelling. Before making the commitment to a consolidation effort, however, companies must understand that the transition will not be easy. According to a Forrester Research survey, consolidation projects typically take 18 to 24 months to complete.² During this time, firms will have to dedicate resources and budget to arm staff with the hardware and software components they need to assess the current operating environment, plan the migration and bring the new architecture online. Along the way, and even after deployment, businesses must be prepared to address a variety of obstacles that can threaten consolidation success. These challenges generally can be grouped into the following three areas: personnel, reporting and tools.

Personnel

One of the key points of data center consolidation is to bring as much of the networks, applications, and services infrastructure as possible together under one roof. As a consequence, IT and operations teams that previously were distributed and functioning in their own domains are now likely to be working side by side. In this scenario, cross-domain diplomacy, at a minimum, is required to ensure smooth operation under nominal conditions, and efficient problem resolution when anomalies arise. To fully reap the benefits of consolidation, diplomacy isn't sufficient. Cross-domain proficiency allows businesses to capitalize on the consolidation by engaging teams of interchangeable parts, which optimizes operations and staff productivity under all conditions. Because of the restrictions of the legacy architecture, cross-domain expertise is almost certain to be lacking, and must be cultivated as part of the migration effort.

Data center consolidation also means the role of the data center manager will change, and firms must account for this fact as they plan the transition. In most distributed enterprises, the narrowly targeted function of the data center requires a technically-focused manager. In a consolidated world, the purview of the data center is far broader, touching much more of the business. The data center manager needs a skill set commensurate with this reality. To bring all parties together effectively, the manager must be a diplomat who possesses not only technical prowess, but also experience in marketing, financial management and operations planning. Finding an individual with these capabilities is essential, but difficult, because few people have such a wide range of skills.

Reporting

With data center consolidation, resources that once spanned the enterprise are gathered into a common pool. As a result, business units that once managed and maintained their own networks, applications and services may no longer be able to do so. In exchange for relinquishing control, these groups are going to demand more than just the common benefits that accompany consolidation. They are going to expect deep visibility into consolidated data center operations, which translates into a requirement for a robustness of reporting that often is lacking.

In essence, business units are internal customers of the consolidated data center. For business unit owners to continue to support the operation, they must be assured that their critical applications are performing at or above the levels they did when the business unit controlled them. In other words, internally facing service-level agreements, or service-level metrics, must be defined and established between the consolidated data center and the business units. These metrics, such as application availability and end-user response time for transactions between the desktop and the data center, when compiled, tracked, and regularly reported, provide the evidence necessary to keep business unit owners on board.
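
As an illustration, the sketch below computes two such service-level metrics, availability and 95th-percentile end-user response time, from a set of transaction records. The record structure and sample data are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class Transaction:
    app: str
    response_ms: float
    succeeded: bool

def availability(txns):
    """Fraction of transactions that completed successfully."""
    ok = sum(1 for t in txns if t.succeeded)
    return ok / len(txns)

def p95_response(txns):
    """95th-percentile end-user response time in milliseconds."""
    times = sorted(t.response_ms for t in txns)
    return quantiles(times, n=20)[-1]  # last of 19 cut points = p95

# Hypothetical sample: one reporting window for a single application.
window = [
    Transaction("order-entry", 180.0, True),
    Transaction("order-entry", 220.0, True),
    Transaction("order-entry", 950.0, False),
]
print(f"availability: {availability(window):.1%}")
print(f"p95 response: {p95_response(window):.0f} ms")
```

Compiled over each reporting period and trended over time, numbers like these are exactly the evidence business unit owners expect to see.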

Service-level metrics are just one important facet of reporting. Another is tied to usage and billing. Business unit owners will only want to pay for the resources they actually use; they will not want to subsidize the activities of other business units by paying an evenly divided share of the consolidated data center's costs. Reporting functionality thus must be enhanced to include usage assessment and corresponding chargeback/billback for all networks, applications and services consumed by each business unit.
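
The arithmetic behind usage-based chargeback is straightforward, as the hypothetical Python sketch below shows; the resources, rates and usage records are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical usage records: (business_unit, resource, units_consumed).
usage = [
    ("sales",   "cpu_hours",  120.0),
    ("sales",   "gb_stored",  400.0),
    ("finance", "cpu_hours",   30.0),
    ("finance", "gb_stored", 1200.0),
]

# Hypothetical unit rates set by the data center's cost model.
rates = {"cpu_hours": 0.12, "gb_stored": 0.02}

def chargeback(usage, rates):
    """Bill each business unit only for the resources it consumed."""
    bills = defaultdict(float)
    for unit, resource, amount in usage:
        bills[unit] += amount * rates[resource]
    return dict(bills)

print(chargeback(usage, rates))
# approximately {'sales': 22.4, 'finance': 27.6}
```

The hard part, of course, is not the multiplication but the measurement: the reporting system must attribute every flow, transaction and storage block to the business unit that consumed it.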

Tools

Data center consolidation and the application virtualization that is likely to accompany it may streamline the enterprise architecture, but they also introduce complexity with regard to managing it. As more services become virtualized, it becomes increasingly difficult to provide a single view of application usage from data center to desktop, because a single physical server can power multiple virtual machines. With database servers, application servers, e-mail servers, print servers, and file servers all potentially sharing the same piece of hardware, tracking network, application, and service performance is a tall order. The additional layer of abstraction inherent to application virtualization adds to the challenge because there usually is less physical evidence available than in traditional environments in which servers and applications are tightly coupled.


Companies face additional obstacles when it comes to effectively monitoring and managing network, application and service performance in a consolidated, virtualized world, above and beyond those tied directly to the architecture. The vast majority of legacy performance-management tools function best when they operate in a silo, focusing on a specific application, service, or geographical or logical slice of the network. Such an approach may be acceptable in a distributed architecture, but it is a recipe for trouble in the consolidated data center, where the number of silos will only grow as application virtualization management tools are introduced that have not yet been integrated with the legacy performance management tools.

The result is a scenario in which operations personnel must conduct "swivel chair" management, relying on a set of disparate tools – each with its own unique capabilities and user interface – and their collective experience and expertise to manually correlate information to identify, isolate, and resolve problems. Best case, performance management is executed much as it was in the distributed environment, bypassing the opportunity to capitalize on colocated information and personnel. Worst case, the various factions of the operations and IT teams fail to peacefully coexist in the consolidated data center, raising the frequency and intensity of accusatory finger-pointing, while lowering the efficiency of anomaly resolution, to the chagrin of internal and external constituents – and corporate management.

Performance management tool essentials

Clearly, the status quo with respect to performance-management tools cannot remain in place if companies are to unlock the full potential of data center consolidation and reap all of its cost, optimization, security and compliance benefits. Legacy performance management tools weren't designed for the consolidated environment and don't account for the nuances and complexities that accompany it, such as application virtualization. What firms need is a next-generation performance-management solution that not only addresses the shortcomings of its predecessors, but also helps neutralize all other consolidation-based challenges, including personnel and reporting issues. Next-generation tools must account for the following three critical characteristics: scope, perspective, and timing.

Scope

Traditional performance management tools fall into one of two camps when it comes to capabilities and purview. One class of tools takes a high-level, broader tack that skims the surface in its data gathering and assessment, with the objective of providing executive dashboards that can be shared with senior management to track overall performance. The other set of tools takes a narrower, deeper dive that focuses on a particular segment of the enterprise, capturing packets, examining individual transactions, and delivering detailed, real-time analytics.

Ideally, IT teams need a multi-dimensional perspective for a complete view. Flow, transactional and SNMP data examine the overall experience, while packet analysis and S2D capabilities assist in troubleshooting and compliance. IT organizations need both breadth and depth of analysis, but they cannot afford the time and effort associated with disjointed point products.

Perspective

Legacy performance management tools are limited not only by what information they make available, but also by how they present it. Network and application viewpoints are necessary to identify the root cause of a problem and resolve it, but not always sufficient, particularly in a consolidated data center in which business unit owners are keeping a close eye on service-level metrics. Unfortunately, legacy tools typically offer no additional viewpoint alternatives, and that hampers the speed and accuracy of the identification/isolation/resolution process.

When an internal business user or external customer reports unacceptably slow application response times, for example, the ideal method to confirm the situation and diagnose the problem is to share the experience. Next-generation performance-management solutions must allow operations and IT staff to view the world from the end-user's perspective, a capability that becomes feasible thanks to the aforementioned expanded scope requirement.

Timing

In a perfect universe, performance management is straightforward. When problems arise, they are easily detected, the source of the anomaly is obvious, and the trouble is rapidly rectified, never to return. The consolidated data center, however, isn't utopia, and monitoring and managing network, application, and service performance isn't so simple. In many instances, performance degrades slowly over time, or problems come and go intermittently.

Gathering performance information from all data sources across the entire enterprise and presenting that information from the end-user's perspective set the stage to successfully address more sophisticated anomalies, but only if all of the information remains available for analysis over an extended time period. Legacy performance-management tools either don't acquire all of the necessary information or they discard information too quickly.


Next-generation performance-management solutions must be capable of obtaining and storing the most granular information for a meaningful, extended duration. Doing so empowers operations and IT to conduct real-time analyses, as well as to go back in time to discrete points in an effort to assess and correlate environments associated with intermittent trouble reports. It also promotes the development of nominal performance baselines over the short-, medium- and long-term, so deviations can be identified and addressed as early as possible as metrics pass through a series of increasingly severe degradation thresholds.
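
A simple way to picture baselining with escalating severity thresholds is the sketch below, which grades a current sample against the mean and standard deviation of its history. The one/two/three-sigma thresholds are illustrative assumptions, not a description of any particular product's logic.

```python
from statistics import mean, stdev

def severity(history, current):
    """
    Grade the current sample against a baseline built from history.
    The 1/2/3-standard-deviation thresholds are illustrative only.
    """
    baseline, spread = mean(history), stdev(history)
    deviation = abs(current - baseline)
    if deviation > 3 * spread:
        return "critical"
    if deviation > 2 * spread:
        return "major"
    if deviation > 1 * spread:
        return "minor"
    return "nominal"

# Hypothetical response-time history (ms) and a degrading sample.
history = [102, 98, 105, 110, 96, 101, 99, 104]
print(severity(history, 135))  # "critical" for this sample data
```

Maintaining separate baselines over short, medium and long windows lets the same logic distinguish a sudden spike from a slow, months-long drift.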

TruView™ Appliance, a true unified platform

The good news for firms that have completed, are in the midst of, or are planning data center consolidation projects is that the next-generation performance management solution isn't just a pipe dream. A state-of-the-art answer is available today: TruView from NETSCOUT. TruView is a 100% Web-based, fully URL-controllable platform whose components, displays and reports can be integrated and customized to meet an organization's precise requirements.


Figure 2: TruView's application-performance dashboard allows you to quickly see the worst performing applications, servers and sites without a single mouse click

An integrated TruView server is the heart of the unified platform. The server hosts a common platform where service and data model definitions can be rationalized. The platform comprises an analytics engine and the baselining, alarming, notification and configuration elements that are the foundation of the solution. With this server architecture, all functionality, information and displays, regardless of source, can be seamlessly intertwined, promoting cross-domain interaction and rapid data correlation for problem identification, isolation and resolution.

The patented IntelliTrace workflow enhances analysis and troubleshooting by making it easy to identify the problem domain and perform root cause analysis with a few clicks of the mouse. TruView provides a single platform with the most robust data collection, analysis and presentation engine.


A layer of common and/or custom access, controls and views rests atop the platform. This layer of TruView's architecture is responsible for interfacing with product components that gather and retain performance data at unparalleled breadth and depth from across the enterprise. Four types of components are native to the solution: network flow appliances, which enable network performance and usage views; application performance appliances, which provide the input for application performance views; Analysis Service Element (ASE) probes, which support wide area network and voice over Internet protocol views; and S2D hardware, which can capture 100% of packets at up to 10 Gbps line speeds. Thanks to TruView's Web-based design, companies also have the option to integrate some or all of their legacy tools into the architecture to leverage their prior investments.

Infrastructure data monitoring

Figure 3: TruView's real-time and historical reports


TruView's network flow appliances interact with the existing infrastructure of routers and switches to obtain flow-based information in any format, including but not limited to IPFIX and Cisco's NetFlow. These flow-based capabilities acquire all of the information from all of the flows all of the time, keeping real-time data at millisecond resolution indefinitely. Because the solution doesn't resort to data averaging or discarding, it isn't limited to assessments that focus only on the top statistics, which may not offer the granularity necessary to support important capabilities such as rogue user discovery, multicast visibility and peer-to-peer analysis.

Operations and IT staff can adjust the appliance's data retention and granularity variables to best suit their particular needs. In tandem with the comprehensive flow coverage the network flow appliances deliver, this enables users of TruView to sweep and swoop from high-level summary views right down to perspectives of individual flows. These full-flow forensics, displays and reports – in real time and over time – empower operations and IT teams to address and plan for scenarios that are relevant in a consolidated data center, including path and session optimization, bandwidth requirements validation, and class-of-service and MPLS network performance management.
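
For readers unfamiliar with flow-based analysis, the sketch below aggregates hypothetical NetFlow/IPFIX-style records into a top-talkers view without sampling or discarding any flow. It is a generic illustration of the technique, not TruView's implementation; the record fields are assumed for the example.

```python
from collections import defaultdict

# Hypothetical flow records with common NetFlow/IPFIX-style fields.
flows = [
    {"src": "10.1.1.5",  "dst": "10.2.0.9", "app": "http", "bytes": 48_000},
    {"src": "10.1.1.5",  "dst": "10.2.0.9", "app": "http", "bytes": 12_500},
    {"src": "10.1.3.20", "dst": "10.2.0.7", "app": "voip", "bytes":  9_800},
]

def top_talkers(flows):
    """Aggregate every flow (no sampling or discarding) by source host."""
    totals = defaultdict(int)
    for f in flows:
        totals[f["src"]] += f["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for host, total in top_talkers(flows):
    print(f"{host}: {total} bytes")
```

The same records can be re-aggregated by destination, application or class of service, which is what makes full-flow retention so much more useful than pre-summarized "top N" statistics.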

Application performance management

Figure 4: With granularity down to one minute, displaying end-user response time broken into its application, network and server components, an infrastructure team can quickly pinpoint the problem domain, eliminating finger-pointing between groups


By utilizing a mirror port or tap to interface with a physical or virtual server, TruView's application performance appliances can access inline or spanned data associated with every application-based transaction, even for those applications that have been virtualized. The application performance appliance is armed with proprietary, patented technology that allows all information to be captured, filtered (discarding duplicate or irrelevant packets), and stored. Summary information is forwarded to the TruView Manager server every 60 seconds in support of high-level application performance views. On demand, as solution users embark on diagnostic activities, the appliance sends increasingly detailed data, right down to individual transactions, for real-time events and transactions that occurred at a prior moment or segment in time.
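
The Python sketch below illustrates the general pattern of discarding duplicate packets and rolling the remainder into 60-second summaries. Hashing the payload alone is a simplification (real deduplication also considers headers and timing), and the capture format is assumed purely for illustration; TruView's actual mechanism is proprietary.

```python
import hashlib
from collections import defaultdict

def summarize(packets, window_s=60):
    """
    Discard duplicate packets (e.g., seen on both sides of a span port)
    and roll the remainder into per-application, per-minute summaries.
    """
    seen = set()
    summaries = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for ts, app, payload in packets:
        digest = hashlib.sha256(payload).digest()
        if digest in seen:  # duplicate of a packet already counted
            continue
        seen.add(digest)
        bucket = int(ts // window_s) * window_s  # 60-second window start
        entry = summaries[(bucket, app)]
        entry["packets"] += 1
        entry["bytes"] += len(payload)
    return dict(summaries)

# Hypothetical capture: (epoch seconds, application, raw payload).
capture = [
    (1_700_000_000, "crm", b"GET /orders"),
    (1_700_000_000, "crm", b"GET /orders"),   # span-port duplicate
    (1_700_000_045, "erp", b"POST /invoice"),
]
print(summarize(capture))
```

The design choice worth noting is the two-tier model: cheap per-minute summaries flow upstream continuously, while the expensive transaction-level detail is retrieved only on demand during diagnosis.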

Optional dedicated hardware collectors

Figure 5: TruView's network-performance summary

TruView's Analysis Service Elements (ASEs) are devices that can be placed either at the consolidated data center or at any remote location across the enterprise's wide area network to provide the desired breadth of information. Individual analysis service elements are designed to capture and return to the TruView server a depth of information that spans layers 1-7 of the network model, from the physical layer to the application layer.

Analysis service elements are ideal for situations in which more stringent visibility and performance criteria are in play, such as physical layer error detection and service level agreements tied to network, server, or application availability. They also are well-suited to environments such as consolidated data centers, where businesses want to take maximum advantage of the convergence of voice, data and video traveling over the same transport to reduce costs and optimize bandwidth without sacrificing performance. Successful voice over Internet protocol deployments, for instance, require a thorough understanding and projection of traffic type and service distribution prior to, during and after the transition from traditional circuit-switched voice. Analysis service elements forward the information that feeds views and reports that support voice over Internet protocol assessments, active or passive performance monitoring, per-call quality measurements, and real-time or back-in-time troubleshooting.
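
As one example of a per-call quality measurement, the sketch below estimates a VoIP mean opinion score from latency, jitter and loss using a widely circulated simplification of the ITU-T E-model. It illustrates the kind of computation involved, not NETSCOUT's measurement method, and the sample inputs are hypothetical.

```python
def mos_estimate(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """
    Estimate a mean opinion score with a simplified E-model:
    start from a best-case R-factor, subtract impairments for
    effective latency and packet loss, then map R to MOS.
    """
    # Jitter is weighted doubly; 10 ms approximates codec delay.
    effective_latency = latency_ms + 2 * jitter_ms + 10.0
    r = 93.2
    if effective_latency < 160:
        r -= effective_latency / 40
    else:
        r -= (effective_latency - 120) / 10
    r -= 2.5 * loss_pct          # penalty per percent packet loss
    r = max(0.0, min(r, 100.0))
    # Standard R-to-MOS mapping from the E-model.
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# Hypothetical call measurements: 80 ms latency, 10 ms jitter, 0.5% loss.
print(f"{mos_estimate(latency_ms=80, jitter_ms=10, loss_pct=0.5):.2f}")
```

Tracked per call and trended over time, scores like this are what make convergence-era service-level agreements enforceable rather than aspirational.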


Robust stream-to-disk

The TruView server provides wire-speed rated packet collection at up to 10 Gbps, which ensures IT teams will never miss an important event. Too often, intermittent problems are much more difficult to solve because of limited or missing data, but TruView's S2D makes sure you have everything you need at your fingertips. Because all data is captured at line speed, there are no longer gaps in the information.

Instead of making assumptions or relying on tidbits of data, TruView's S2D capability stores all the flows, transactions and packets with no limiting or pruning. S2D is valuable for troubleshooting problems faster and reducing the time needed to satisfy auditing and compliance requirements. The correlated TruView platform provides easy access to the right information from when the problem occurred – whether IT teams are troubleshooting real-time issues or trying to identify the culprit behind an intermittent issue that occurred hours or days ago.

Achieving and measuring data center consolidation success

The verdict on the success or failure of a data center consolidation project should not be based on a short-term, subjective or qualitative assessment by individual stakeholders. Instead, firms must rely on quantitative statistics and metrics computed over the long term that account for the impacts to all constituencies – business unit owners, IT and operations staff, corporate management, and customers.

The Forrester Research survey of 147 U.S. enterprises that had completed or were actively executing a data center consolidation effort asked those companies to identify the top five metrics they were using to measure consolidation success. 52% of those surveyed cited operational cost in their top five list, followed closely by total cost of ownership at 44%, percent of IT budget saved at 38%, application performance versus infrastructure cost at 35%, and performance per CPU core at 34%.³

The deployment of a performance-management tool not only aids in the collection of these statistics, but it also directly affects their value. The right performance-management solution enables small, medium and large businesses to realize the cost, optimization, security and compliance benefits of data center consolidation to their fullest extent. At the same time, such a solution protects firms against the challenges that often accompany consolidation, including personnel, reporting and tool-related issues.

Legacy performance-management tools simply aren't up to the task because they lack the necessary scope, perspective and timing to deliver in a consolidated environment. There's too much riding on data center consolidation to ignore the performance-management function or entrust it to a set of inadequate, disparate tools that collectively fail to account for all network, application and service performance requirements. Businesses need a next-generation performance management solution with the architecture, breadth, depth, scalability, functionality and common data model to propel the consolidation project to success today and tomorrow. Only one product delivers the next-generation performance management essentials: TruView from NETSCOUT.

¹ Cost analysis and measurement help ensure consolidation success, Forrester Research, January 2009.

² Cost analysis and measurement help ensure consolidation success, Forrester Research, January 2009.

³ Cost analysis and measurement help ensure consolidation success, Forrester Research, January 2009.