This document is provided "as-is." Information and views expressed in this document, including URL and other Internet website references, may change without notice. You bear the risk of using it. Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred.
© 2011 Microsoft. All rights reserved.
Microsoft, Windows, Windows Server, Windows Vista, Windows Azure, Windows PowerShell, Silverlight, Expression, Expression Blend, MSDN, IntelliSense, IntelliTrace, Internet Explorer, SQL Azure, SQL Server, Visual C#, Visual C++, Visual Basic, and Visual Studio are trademarks of the Microsoft group of companies.
All other trademarks are the property of their respective owners.
Energy use in the IT sector is growing faster than in any other industry as society becomes ever more dependent on the computational and storage capabilities provided by data centers. Unfortunately, a combination of inefficient equipment, outdated operating practices, and lack of incentives means that much of the energy used in traditional data centers is wasted.
Most IT energy efficiency efforts have focused on physical infrastructure: deploying more energy-efficient computer hardware and cooling systems, using operating system power management features, and reducing the number of servers in data centers through hardware virtualization.
But a significant amount of this wasted energy stems from how applications are designed and operated. Most applications are provisioned with far more IT resources than they need, as a buffer to ensure acceptable performance and to protect against hardware failure. Most often, the actual needs of the application are simply never measured, analyzed, or reviewed.
Once the application is deployed with more resources than it typically needs, there is very little incentive for the application developers to instrument their application to make capacity planning easier. And when users start complaining that the application is performing slowly, it's often easier (and cheaper) to simply assign more resources to the application. Very rarely are these resources ever removed, even after demand for the application subsides.
Cloud computing has the potential to break this dynamic of over-provisioning applications. Because cloud platforms like Windows Azure charge for resource use in small increments (compute-hours) on a pay-as-you-go basis, developers can now have a direct and controllable impact on IT costs and associated resource use.
Applications that are designed to dynamically grow and shrink their resource use in response to actual and anticipated demand are not only less expensive to operate, but are also significantly more efficient in their use of IT resources than traditional applications. Developers can also reduce hosting costs by scheduling background tasks to run during less busy periods, when the minimum amount of resources is assigned to the application.
While the cloud provides great opportunities for saving money on hosting costs, developing a cloud application that relies on other cloud services is not without its challenges. One particular problem that developers have to deal with is "transient faults." Although such faults are infrequent, applications have to tolerate intermittent connectivity and responsiveness problems in order to be considered reliable and provide a good user experience.
Until now, developers on Windows Azure had to build these capabilities on their own. With the release of the Enterprise Library Integration Pack for Windows Azure, developers can now easily build robust, resource-efficient applications that can be intelligently scaled and throttled, and that can handle transient faults.
The first major component contained within the Integration Pack is the Autoscaling Application Block, otherwise known as "WASABi." This application block helps developers improve responsiveness and control Windows Azure costs by automatically scaling the number of web and worker roles in Windows Azure through dynamic provisioning and decommissioning of role instances across multiple hosted services. WASABi also provides mechanisms to help control resource use without scaling role instances through application throttling. Developers can use this application block to intelligently schedule or defer background processing to keep the number of role instances within certain boundaries and take advantage of idle periods.
One of the major advantages of WASABi is its extensibility, which makes your solutions much more flexible. Staying true to the design principles of other application blocks, WASABi provides a mechanism for plugging in your own custom metrics and calling custom actions. With these, you can design a rule set that takes into account your business scenarios, not just the standard performance counters available through Windows Azure Diagnostics.
The optimizing stabilizer ensures that you do not scale too quickly, and it can time scaling actions to make the best use of the compute hours you are charged for. For applications that expect to run more than a few instances, this application block will help developers save money on hosting costs while improving the "green credentials" of their application. It will also help your application meet its target SLAs.
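As a taste of what this looks like in practice, the following minimal sketch shows the common pattern of hosting the autoscaler in a dedicated worker role so that it can evaluate rules and issue scaling actions. The type names follow the Autoscaling Application Block's public API, but treat the sketch as illustrative and check the details against the version of the Integration Pack you install.

C#
// Illustrative sketch: hosting the WASABi autoscaler in a worker role.
// Verify the namespaces and type names against your version of the
// Enterprise Library Integration Pack for Windows Azure.
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;
using Microsoft.WindowsAzure.ServiceRuntime;

public class AutoscalingWorkerRole : RoleEntryPoint
{
    private Autoscaler autoscaler;

    public override bool OnStart()
    {
        // Resolve the Autoscaler from the Enterprise Library container and
        // start it. It then periodically evaluates the configured constraint
        // and reactive rules and performs any resulting scaling actions.
        this.autoscaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
        this.autoscaler.Start();
        return base.OnStart();
    }

    public override void OnStop()
    {
        // Stop evaluating rules when the hosting role shuts down.
        this.autoscaler.Stop();
        base.OnStop();
    }
}

The rules themselves (boundaries on instance counts, reactive triggers based on metrics such as CPU utilization or queue length, and any custom operands or actions you plug in) live in a separate rules store, so operators can adjust them without redeploying the application.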
The other major component is the Transient Fault Handling Application Block (also known as "Topaz"), which helps developers make their applications more robust by providing the logic for detecting and handling transient fault conditions for a number of common cloud-based services.
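To give a flavor of the approach, here is a minimal sketch that wraps a SQL Azure call in a retry policy with a fixed-interval retry strategy. Again, the type names follow the Transient Fault Handling Application Block's API, but treat the exact namespaces, class names, and the placeholder connection string as illustrative.

C#
// Illustrative sketch: retrying a SQL Azure call with the Transient Fault
// Handling Application Block ("Topaz"). Verify namespaces and type names
// against the version of the Integration Pack you install.
using System;
using System.Data.SqlClient;
using Microsoft.Practices.TransientFaultHandling;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.SqlAzure;

class TransientFaultHandlingExample
{
    static void Main()
    {
        // Retry up to three times, waiting one second between attempts, but
        // only when the SQL Azure error detection strategy classifies the
        // failure as transient (for example, throttling or a dropped connection).
        var retryStrategy = new FixedInterval(3, TimeSpan.FromSeconds(1));
        var retryPolicy =
            new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(retryStrategy);

        retryPolicy.ExecuteAction(() =>
        {
            using (var connection = new SqlConnection("<your connection string>"))
            {
                connection.Open();
                // Execute commands against the database here; if a transient
                // error occurs, the whole action is retried.
            }
        });
    }
}

Other retry strategies (such as exponential back-off) and detection strategies for services such as Windows Azure storage and Service Bus follow the same pattern.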
More than ever before, developers have an important role to play in controlling IT costs and improving IT energy efficiency, without sacrificing reliability. The Enterprise Library Integration Pack for Windows Azure can assist them in rapidly building Windows Azure-based applications that are reliable, resource-efficient, and cost-effective.
The Developer's Guide you are holding in your hands was written by the engineering team that designed and produced this integration pack. It is full of useful guidance and tips to help you learn quickly. Importantly, the coverage includes not only conceptual topics but also the concrete steps taken to make the accompanying reference implementation (Tailspin Surveys) more elastic, robust, and resilient.
Moreover, the guidance from the Microsoft patterns & practices team is not only encapsulated in the Developer's Guide and the reference implementation. Because the pack ships with its source code and all of its unit tests, a lot can also be learned by examining those artifacts.
I highly recommend both the Enterprise Library Integration Pack for Windows Azure and this Developer's Guide to architects, software developers, administrators, and product owners who design new applications for Windows Azure or migrate existing applications to it. The practical advice contained in this book will help make your applications highly scalable and robust.
Mark Aggar, Senior Director
Environmental Sustainability
Microsoft Corporation
The Windows Azure technology platform offers exciting new opportunities for companies and developers to build large and complex applications to run in the cloud. Windows Azure enables you to take advantage of a pay-as-you-go billing model for your application infrastructure and on-demand computing resources.
By combining the existing Microsoft Enterprise Library application blocks, which help you design applications that are robust, configurable, and easy to manage, with new blocks designed specifically for the cloud, you can create highly scalable, robust applications that take full advantage of Windows Azure.