High Performance JavaScript
Nicholas C. Zakas
Published by Yahoo Press
Beijing · Cambridge · Farnham · Köln · Sebastopol · Tokyo
This book is dedicated to my family, Mom, Dad, and Greg, whose love and support have kept me going through the years.
Preface
When JavaScript was first introduced as part of Netscape Navigator in 1996, performance wasn't that important. The Internet was in its infancy and it was, in all ways, slow. From dial-up connections to underpowered home computers, surfing the Web was more often a lesson in patience than anything else. Users expected to wait for web pages to load, and when a page loaded successfully, it was a cause for celebration.
JavaScript's original goal was to improve the user experience of web pages. Instead of going back to the server for simple tasks such as form validation, JavaScript allowed that functionality to be embedded directly in the page, saving a rather long trip back to the server. Imagine the frustration of filling out a long form, submitting it, and then waiting 30–60 seconds just to get a message back indicating that you had filled in a single field incorrectly. JavaScript can rightfully be credited with saving early Internet users a lot of time.
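As a rough illustration (the form and field names here are hypothetical, not taken from any particular page of the era), that kind of client-side validation looked something like this:

    // A minimal sketch of era-appropriate client-side validation.
    // Returning false from the onsubmit handler cancels submission,
    // saving the roundtrip to the server described above.
    function validateForm(form) {
        if (form.elements["email"].value === "") {
            alert("Please enter your email address.");
            return false;  // block the submit; no server trip needed
        }
        return true;
    }

    // Attached in the markup: <form onsubmit="return validateForm(this);">

A blank field is caught instantly in the browser instead of a minute later on the server.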
The Internet Evolves
Over the decade that followed, computers and the Internet continued to evolve. To start, both got much faster. The rapid speed-up of microprocessors, the availability of cheap memory, and the appearance of fiber optic connections pushed the Internet into a new age. With high-speed connections more available than ever, web pages started becoming heavier, embedding more information and multimedia. The Web had changed from a fairly bland landscape of interlinked documents into one filled with different designs and interfaces. Everything changed, that is, except JavaScript.
What previously was used to save server roundtrips started to become more ubiquitous. Where there were once dozens of lines of JavaScript code were now hundreds, and eventually thousands. The introduction of Internet Explorer 4 and dynamic HTML (the ability to change aspects of the page without a reload) ensured that the amount of JavaScript on pages would only increase over time.
The last major step in the evolution of browsers was the introduction of the Document Object Model (DOM), a unified approach to dynamic HTML that was adopted by Internet Explorer 5, Netscape 6, and Opera. This was closely followed by the standardization of JavaScript into ECMA-262, third edition. With all browsers supporting the DOM and (more or less) the same version of JavaScript, a web application platform was born. Despite this huge leap forward, with a common API against which to write JavaScript, the JavaScript engines in charge of executing that code remained mostly unchanged.
Why Optimization Is Necessary
The JavaScript engines that supported web pages with a few dozen lines of JavaScript in 1996 are the same ones running web applications with thousands of lines of JavaScript today. In many ways, the browsers fell behind in their management of the language and in doing the groundwork so that JavaScript could succeed at a large scale. This became evident with Internet Explorer 6, which was heralded for its stability and speed when it was first released but later reviled as a horrible web application platform because of its bugs and slowness.
In reality, IE 6 hadn't gotten any slower; it was just being asked to do more than it had previously. The early web applications being created when IE 6 was introduced in 2001 were much lighter and used much less JavaScript than those created in 2005. As the amount of JavaScript code grew, the IE 6 JavaScript engine struggled to keep up because of its static garbage-collection routine: the engine waited for a fixed number of objects in memory to determine when to collect garbage. Earlier web application developers had run into this threshold infrequently, but more JavaScript code means more objects, and complex web applications began to hit the threshold quite often. The problem became clear: JavaScript developers and web applications had evolved while the JavaScript engines had not.
Although other browsers had more logical garbage collection routines and somewhat better runtime performance, most still used a JavaScript interpreter to execute code. Code interpretation is inherently slower than compilation because there's a translation step between the code and the computer instructions that must be run. No matter how smart and optimized interpreters get, they always incur a performance penalty.
Compilers are filled with all kinds of optimizations that allow developers to write code in whatever way they want without worrying whether its optimal. The compiler can determine, based on lexical analysis, what the code is attempting to do and then optimize it by producing the fastest-running machine code to complete the task. Interpreters have few such optimizations, which frequently means that code is executed exactly as it is written.
In effect, JavaScript forces the developer to perform the optimizations that a compiler would normally handle in other languages.
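To make this concrete, here is a hedged sketch of one such hand-optimization, loop-invariant hoisting, which compilers for other languages perform automatically (the items array and process function are placeholders):

    var items = ["a", "b", "c"];
    function process(item) { /* work on one item */ }

    // As written: items.length may be re-read on every iteration.
    for (var i = 0; i < items.length; i++) {
        process(items[i]);
    }

    // Hand-optimized: cache the length once, the kind of
    // loop-invariant code motion a compiler would do automatically.
    for (var j = 0, len = items.length; j < len; j++) {
        process(items[j]);
    }

In a compiled language, both loops would likely produce the same machine code; in interpreted JavaScript, the second version had to be written by hand.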
Next-Generation JavaScript Engines
In 2008, JavaScript engines got their first big performance boost. Google introduced its brand-new browser, Chrome, the first browser released with an optimizing JavaScript engine, code-named V8. V8 is a just-in-time (JIT) compilation engine for JavaScript: it produces machine code from JavaScript source and then executes it. The resulting experience is blazingly fast JavaScript execution.
Other browsers soon followed suit with their own optimizing JavaScript engines. Safari 4 features the SquirrelFish Extreme (also called Nitro) JIT JavaScript engine, and Firefox 3.5 includes the TraceMonkey engine, which optimizes frequently executed code paths.
With these newer JavaScript engines, optimizations are being done at the compiler level, where they should be done. Someday, developers may be completely free of worry about performance optimizations in their code. That day, however, is still not here.
Performance Is Still a Concern
Despite advancements in core JavaScript execution time, there are still aspects of JavaScript that these new engines don't handle. Delays caused by network latency and operations affecting the appearance of the page have yet to be adequately optimized by browsers. While simple optimizations such as function inlining, code folding, and smarter string concatenation algorithms are easily handled by compilers, the dynamic and multifaceted structure of web applications means that these optimizations solve only part of the performance problem.
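For instance, before engines shipped smarter string concatenation algorithms, developers often sidestepped the cost themselves; here is a minimal sketch of that historical workaround (the loop bound and strings are arbitrary):

    // Historical workaround: collect pieces in an array and join once,
    // avoiding repeated intermediate strings in older engines.
    var pieces = [];
    for (var i = 0; i < 1000; i++) {
        pieces.push("item " + i);
    }
    var log = pieces.join("\n");

Modern engines handle the naive += pattern far better, which is exactly the kind of compiler-level fix described above; the slower, page-level problems remain the developer's job.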
Though newer JavaScript engines have given us a glimpse into the future of a much faster Internet, the performance lessons of today will continue to be relevant and important for the foreseeable future.