Monday, August 31, 2015

Book Review: JavaScript: The Good Parts

As a book that is frequently quoted with reverence by developers who regularly use the language it covers, it could be argued that JavaScript: The Good Parts is to JavaScript developers what Effective Java is to Java developers. The subtitle of Douglas Crockford's JavaScript: The Good Parts is "Unearthing the Excellence in JavaScript." I finally purchased and read JavaScript: The Good Parts (O'Reilly/Yahoo! Press, 2008) and this is my review of that book. It is more than a review, however, in that it also provides me a forum to highlight some of the observations from that book that most interested me.

JavaScript: The Good Parts is a relatively short book with ten chapters and five appendices spanning fewer than 150 pages. It's impressive how much content can be squeezed into 150 pages and is a reminder that the best writing (in prose and in code) is often that which can say more in fewer words. I was able to read all of the chapters and the two appendices that interested me the most during a flight that took a little more than an hour (although my reading started as soon as I was seated on the airplane). There is one caveat to this: although some of JavaScript: The Good Parts is a very quick read for anyone with basic familiarity with JavaScript, other portions required me to re-read them or even tell myself, "I better come back to that later and read it again." This is another way in which this book reminds me of Effective Java.

Preface

The first page of the Preface is the core content of that section and it provides an overview of what a reader of JavaScript: The Good Parts should expect. The Preface describes JavaScript as "a surprisingly powerful language" which has some "unconventionality" that "presents some challenges," but is also a "small language" that is "easily mastered." One category of developer to which this book is targeted is "programmers who have been working with JavaScript at a novice level and are now ready for a more sophisticated relationship with the language." That sounds like me!

Crockford uses the Preface to describe what JavaScript: The Good Parts covers. He states, "My goal here is to help you learn to think in JavaScript." He also points out that JavaScript: The Good Parts "is not a book for beginners," "is not a reference book," "is not exhaustive about the language and its quirks," "is not a book for dummies," and "is dense."

Chapter 1: Good Parts

In the initial chapter of JavaScript: The Good Parts, Crockford points out that programming languages have "good parts and bad parts" and that "JavaScript is a language with more than its share of bad parts." He points out that these deficiencies are largely due to the short amount of time in which JavaScript was created and articulates, "JavaScript's popularity is almost completely independent of its qualities as a programming language." Crockford has found that a developer can write better programs in any language by only using the good parts of that language as much as possible. This seems to be particularly true with JavaScript. Crockford provides a high-level description of JavaScript's good parts:

"JavaScript has some extradordinarily good parts. In JavaScript, there is a beautiful, elegant, highly expressive language that is buried under a steaming pile of of good intentions and blunders."

In the section "Analyzing JavaScript," Crockford surveys the "very good ideas" that JavaScript is built upon along with the "few very bad" ideas that JavaScript is built upon. This first chapter is only 4 pages and the overview of these good and bad ideas is contained in a couple of pages. However, the remaining chapters of the book provide more details on the good parts and the first two appendices provide more details on the bad parts.

I agree with Crockford's assertion that a significant portion of the negativity and even hostility toward JavaScript is probably more appropriately aimed at the DOM. JavaScript has probably been accused of being non-standard and browser-specific millions of times when it is really the browser's DOM implementation that is non-standard and browser-specific.

I cannot finish my review of the first chapter of JavaScript: The Good Parts without quoting one more astute quote: "[Given JavaScript's] many errors and sharp edges, ... 'Why Should I Use JavaScript?' There are two answers. The first is that you don't have a choice. ... JavaScript is the only language found in all browsers. ... The other answer is that, despite its deficiencies, JavaScript is really good."

Chapter 2: Grammar

The second chapter of JavaScript: The Good Parts provides 15 pages of introduction to the "grammar of the good parts of JavaScript, presenting a quick overview of how the language is structured." For someone who has used JavaScript previously, much of this chapter may not be particularly insightful, though just seeing the parts of the language that Crockford feels are "good" is useful. The section on "Statements" points out early some "unconventional" aspects of JavaScript: code blocks delineated by curly braces do not limit scope to those blocks and variables should be defined at the beginning of a function rather than at first use.

Chapter 3: Objects

The 6 pages of Chapter 3 introduce JavaScript Objects. Significant aspects of JavaScript objects (key/value pair nature, prototype object association, pass-by-reference, object inspection with typeof and hasOwnProperty, and reducing an object's "global footprint") are covered succinctly.

Chapter 4: Functions

Chapter 4 of JavaScript: The Good Parts begins with the statements, "The best thing about JavaScript is its implementation of functions. It got almost everything right. But, as you should expect with JavaScript, it didn't get everything right." This chapter is longer (20 pages) than the ones before it, reinforcing that Crockford believes functions are one of the really good parts of JavaScript. Despite its being lengthier than the preceding chapters, Chapter 4 seems to me to also be more dense (particularly than Chapters 2 and 3).

Chapter 4's coverage of JavaScript functions points out one of the differences in JavaScript I needed to come to terms with to feel more confident with the language: "Functions in JavaScript are objects." The section on function invocation briefly describes the four patterns of invocation in JavaScript (method invocation, function invocation, constructor invocation, and apply invocation) and explains how this is initialized differently depending on the particular pattern of invocation used. JavaScript's different meaning of this depending on context has been one of the more difficult aspects of working with JavaScript after coming from a Java background, but this explanation is the clearest and easiest to remember that I have read.
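
To make the four patterns concrete, here is a minimal sketch (my own illustration, not the book's example) of how this is bound under each pattern; the counter and Point names are purely hypothetical:

var counter = {
    value: 0,
    increment: function (amount) {
        this.value += amount;               // method invocation: this is the counter object
    }
};
counter.increment(2);

function whoAmI() {
    return this;                            // function invocation: this is the global object
}                                           // (undefined in later strict-mode code)

function Point(x) {
    this.x = x;                             // constructor invocation (with new): this is the new object
}
var p = new Point(3);

counter.increment.apply({value: 10}, [5]);  // apply invocation: this is whatever object is passed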

The fourth chapter covers exception handling, method cascading, and type augmentation. The section on "Augmenting Types" presents multiple examples of adding "significant improvements to the expressiveness of the language" by "augmenting the basic types" via addition of methods to appropriate prototypes.
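
As a rough sketch of that idea (not necessarily the book's exact code), a method can be added to a built-in prototype, guarded so that it does not clobber a method another library may have already defined:

if (typeof String.prototype.trim !== 'function') {
    String.prototype.trim = function () {
        // remove leading and trailing whitespace
        return this.replace(/^\s+|\s+$/g, '');
    };
}
console.log('  padded  '.trim());   // "padded"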

The sections on "Recursion," "Closure," and "Module" are where things got a bit dense for me and I needed to read several portions of these sections more than once to more fully appreciate the points being made. I believe I still have a ways to go to understand these concepts completely, but I also believe that understanding them well and implementing the module concept presented here is the key to happiness in large-scale JavaScript development.

The "Curry" section of Chapter 4 states that JavaScript lacks a curry method, but explains how to address that by associating a curry method with Function. The "Memoization" section demonstrates how to use memoization in JavaScript so that "functions can use objects to remember the results of previous operations, making it possible to avoid unnecessary work."

Chapter 5: Inheritance

JavaScript: The Good Parts's fifth chapter begins by briefly explaining the two "useful services" that inheritance provides in "classical languages (such as Java)": code reuse and a type system. Because JavaScript is dynamically typed, it gains only one of these advantages from inheritance: code reuse. Crockford states that "JavaScript provides a much richer set of code reuse patterns" than the "classical pattern."

The "Pseudoclassical" section of Chapter 5 begins with the assertion that "JavaScript is conflicted about its prototypal nature." There is in-depth discussion about the dangeris and drawbacks of using the constructor invocation pattern. The most "serious hazard" occurs when a developer forgets to use new when calling the constructor function. Crockford warns that in such cases, this is associated with the global object rather than the (likely) intended new object. The author states that convention is to use uppercase for the first letter of the "constructor function" objects" to indicate this risk, but he advises that the better course is to not use new or the constructor invocation pattern at all.

This discussion in the "Pseudoclassical" section of Chapter 5 provides more detail on issues Crockford raised with the "constructor invocation pattern" in Chapter 4. These two sections forced me to acknowledge that while I've liked using the constructor invocation pattern in JavaScript, it's only because "the pseudoclassical form can provide comfort to developers who are unfamiliar with JavaScript." Crockford warns that its use "hides the true nature of the language."

Chapter 5 introduces object specifiers and dives into coverage of JavaScript's prototypal implementation and differential inheritance. The "Functional" section of this fifth chapter illustrates how to use a functional approach to reuse and states that this functional approach "requires less effort than the pseudoclassical pattern and gives us better encapsulation and information hiding and access to super methods." The fifth chapter concludes with discussion and code example of composing objects "out of sets of parts."
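
A simplified sketch of that functional approach (my own illustration with hypothetical names, not the book's listing):

var vehicle = function (spec) {
    var that = {};                               // spec stays private inside the closure
    that.describe = function () {
        return spec.make + ' ' + spec.model;
    };
    return that;
};

var car = function (spec) {
    var that = vehicle(spec);                    // reuse vehicle's behavior
    that.honk = function () {
        return that.describe() + ' says beep';
    };
    return that;
};

var myCar = car({make: 'Acme', model: 'Roadster'});
console.log(myCar.honk());                       // "Acme Roadster says beep"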

Chapter 6: Arrays

The 6-page sixth chapter of JavaScript: The Good Parts introduces the concept of an array and mentions a couple of its benefits, but laments, "Unfortunately, JavaScript does not have anything like this kind of array." The author describes what JavaScript offers as "an object that has some array-like characteristics." He points out that this array-like object is "significantly slower than a real array, but it can be more convenient to use."

Chapter 6 discusses JavaScript's "unconventional" length property for JavaScript "arrays" and introduces syntax for accessing elements, push, and delete. Crockford points out that "JavaScript does not have a good mechanism for distinguishing between arrays and objects" and he provides two brief implementations of is_array functions (the second relies on toString() not being overridden).
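
The toString-based test is short enough to sketch here (essentially the approach the book describes, though this exact code is my paraphrase):

var is_array = function (value) {
    return Object.prototype.toString.apply(value) === '[object Array]';
};
console.log(is_array([1, 2, 3]));        // true
console.log(is_array({length: 3}));      // false: array-like, but not an array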

The sixth chapter wraps up with discussion regarding adding methods to JavaScript's Array. Specific code examples include a function that initializes the elements of a JavaScript array and a function that initializes the elements of a matrix (array of arrays).

Chapter 7: Regular Expressions

The nearly 23 pages of JavaScript: The Good Parts's seventh chapter focus on applying regular expressions in JavaScript. For those who have used other implementations of regular expressions (particularly Perl's or implementations based on Perl's), this will be fairly familiar.

Crockford points out several motivations for keeping regular expressions simple, but a JavaScript-specific motivation for simpler regular expressions that he cites has to do with lack of portability between different JavaScript language processors' regular expression support.

Chapter 7 introduces two forms of creating regular expressions in JavaScript: literals (/ syntax) and RegExp constructor. The chapter also introduces other JavaScript syntax for working with various regular expression concepts in JavaScript.

Chapter 8: Methods

The 15+ pages of Chapter 8 of JavaScript: The Good Parts feel like an API reference and remind me of books such as Java in a Nutshell. These pages summarize the "small set of standard methods that are available on the standard types" in JavaScript. The chapter lists the method signature, a brief method description, and examples of use for the standard methods defined on Array, Function, Number, Object, RegExp, and String. Although these are nice summary descriptions and example usages, this chapter may be the least useful chapter of the book given that these APIs are documented online in sites such as the Mozilla Developer Network's JavaScript Reference.

Chapter 9: Style

JavaScript: The Good Parts's most (pleasantly) surprising chapter for me may be Chapter 9. When I was browsing the table of contents and saw "Style," I thought this chapter would be another bland spelling out of what to do and not do stylistically in code. I'm tired of these stylistic discussions. The chapter is fewer than 4 pages, so I did not expect much.

It turns out that the ninth chapter has some important observations in its just over three pages on style. I like that Crockford takes the reasons for style concerns with any programming language and emphasizes that they are particularly important in JavaScript.

My favorite part of Chapter 9 is when Crockford explains his style used in the book for JavaScript code. Some of it is the bland matter-of-taste stuff like number of spaces for indentation, but some of it is motivated by an understanding of JavaScript nuances and limitations. For example, Crockford states, "I always use the K&R style putting the { at the end of a line instead of the front, because it avoids a horrible design blunder in JavaScript's return statement." Similarly, he points out that he declares variables at the beginning of a function and prefers line comments over block comments because of other nuances of JavaScript. He (and my review) covers these more in the appendices.
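
A quick sketch of the return-statement blunder he alludes to (my own example): with the { on its own line, automatic semicolon insertion terminates the return and the object literal is never returned.

function brokenMake() {
    return                      // a semicolon is silently inserted here...
    {
        ok: true                // ...so this object literal is unreachable
    };
}

function workingMake() {
    return {                    // { on the same line as return: the object is returned
        ok: true
    };
}

console.log(brokenMake());      // undefined
console.log(workingMake());     // { ok: true }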

This second-to-last chapter offers some poignant advice regarding coding style and large-scale JavaScript applications:

"Quality was not a motivating concern in the design, implementation, or standarization of JavaScript. That puts a greater burden on the users of the language to resist the language's weaknesses. JavaScript provides support for large programs, but it also provides forms and idioms that work against large programs.

Chapter 10: Beautiful Features

JavaScript's bad features are the focus of Appendix A ("Awful Parts" that "are not easily avoided") and Appendix B ("problematic features" that "are easily avoided"), but Crockford focuses Chapter 10 on what he considers JavaScript's "beautiful features." Because this is the theme of the book, this chapter only needs a bit over 2 pages to highlight Crockford's concept of "Simplified JavaScript": taking the "best of the Good Parts" of JavaScript, removing the features of the language with very little or even negative value, and adding a few new features (such as block scoping, perhaps the thing I miss most in JavaScript).

Appendix A: Awful Parts

Appendix A highlights the "problematic features of JavaScript that are not easily avoided" in just over 7 pages. Crockford warns, "You must be aware of these things and be prepared to cope."

The body of the appendix opens with an assertion that's difficult to argue with: "The worst of all of JavaScript's bad features is its dependence on global variables." I also like that Crockford points out that while many programming languages "have global variables," the problem with JavaScript is that it "requires them."

Appendix A also highlights why JavaScript's handling of reserved words, lack of block scope, 16-bit Unicode support, typeof limitations, parseInt without explicit radix, confusion of + for adding or concatenating, "phony" arrays, and a few other features are problematic and how to avoid or reduce their use.
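
A few of those pitfalls are easy to sketch in a couple of lines (behavior described as of the pre-ES5 engines the book targets):

console.log(typeof null);           // "object"  -- a long-standing typeof quirk
console.log(typeof []);             // "object"  -- typeof cannot distinguish the "phony" arrays
console.log(parseInt('08'));        // 0 on old engines that treated a leading zero as octal
console.log(parseInt('08', 10));    // 8 -- always pass an explicit radix
console.log(1 + '1');               // "11" -- + concatenates when either operand is a string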

Perhaps the most interesting discussion for me in Appendix A is the explanation of why JavaScript may sometimes insert semicolons and, instead of fixing things, make things worse by masking more significant code issues.

Appendix B: Bad Parts

The six pages of Appendix B "present some of the problematic features of JavaScript that are easily avoided." The chapter details why JavaScript features such as ==, with, continue, falling through switch, statements without blocks, bitwise operators, typed wrappers (and new Object and new Array), and void should be avoided.

Appendix C: JSLint

Appendix C provides 10 pages focused on JSLint, described as "a JavaScript syntax checker and verifier." About JSLint, Crockford states, "JSLint defines a professional subset of JavaScript ... related to the style recommendations from Chapter 9. JavaScript is a sloppy language, but inside it there is an elegant, better language. JSLint helps you to program in that better language and to avoid most of the slop."

Appendix C details how JSLint helps JavaScript developers identify global variables and functions, identify potentially misspelled members (used only once because misspelled but JavaScript itself won't report), identify missing or extraneous semicolons, identify potential issues of automatic semicolon insertion due to improper line breaking, and identify block statements missing opening and closing curly braces. Other items flagged by JSLint include fall-through switch statements, use of with, assignment operators used in conditional expressions, potential JavaScript type coercion with == and !=, eval, void, bitwise operators, potentially non-portable regular expressions, and constructor functions.

The chapter also demonstrates how to specify to JSLint the "subset of JavaScript that is acceptable." In other words, one can choose to not have certain conditions flagged by JSLint. I find it interesting that JSLint provides some HTML validation in addition to checking for well-formed JSON.

I have found that static code analysis tools for Java not only help improve existing Java code, but help me write better Java code in the future as I learn what is considered wrong or bad form, why it is wrong or frowned upon, and how to avoid it. The same is true for JSLint's effect on JavaScript; a person learning JavaScript can benefit from learning what JSLint flags to know the bad/ugly parts of JavaScript to avoid.

Appendix D: Syntax Diagrams

The fourth appendix consists solely of syntax diagrams that graphically indicate how various JavaScript constructs are syntactically constructed. The diagrams are of the portions of JavaScript highlighted in JavaScript: The Good Parts. Appendix D is a reference guide similar to Chapter 8 and, like Chapter 8, is probably the least valuable of the book's appendices because it is information that is readily available online.

Appendix E: JSON

The final ten pages of the book are in Appendix E and are dedicated to JavaScript Object Notation (JSON). The appendix describes JSON as "based on JavaScript's object literal notation, one of JavaScript's best parts." This introduction explains that JSON is a text-based format that is a subset of JavaScript but can also be used as a language-independent data transfer format. Most of the material in this appendix was obviously a lot newer to people in 2008 when this book was published than it is today because today many developers who don't even know JavaScript very well are aware of JSON.

Appendix E describes the syntax rules of JSON in approximately a single page because "JSON's design goals were to be minimal, portable, textual, and a subset of JavaScript."

The section of Appendix E on "Using JSON Securely" looks at the risks of using JavaScript's eval to turn JSON into a useful JavaScript data structure and recommends use of JSON.parse instead. There is also interesting discussion on security implications of assigning an HTML text fragment sent by the server to an HTML element's innerHTML property. What makes this interesting is Crockford's pointing out that this security issue has nothing to do with Ajax, XMLHttpRequest, or JSON, but is rather due to the core JavaScript design flaw of featuring a global object. Crockford takes one more shot at this "feature": "This danger is a direct consequence of JavaScript's global object, which is far and away the worst part of JavaScript's many bad parts. ... These dangers have been in the browser since the inception of JavaScript, and will remain until JavaScript is replaced. Be careful."
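
A minimal sketch of that recommendation using the now-standard JSON.parse (the payload here is purely illustrative):

var payload = '{"name": "widget", "count": 2}';     // text received from a server
var data = JSON.parse(payload);                     // throws on malformed input instead of executing it
console.log(data.name, data.count);                 // "widget" 2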

The last 5 1/2 pages of Appendix E feature a code listing for a JSON parser written in JavaScript.

General Observations

  • JavaScript: The Good Parts deserves the praise and reverence heaped upon it; it is a great book and I cannot think of a JavaScript book that I've read that has done as much for my understanding of this unconventional language as JavaScript: The Good Parts.
  • Many technology books rave about the covered language, framework, or library and either don't acknowledge the deficiencies and downsides of the covered item or quickly explain them away as insignificant or inconsequential. JavaScript: The Good Parts is more effective because it doesn't do this. Instead, Crockford's writing makes it obvious that there are many aspects of JavaScript he likes and finds expressive, but that he also recognizes its downsides. His book is an attempt to teach how to mostly use only good parts of JavaScript and mostly avoid use of the bad parts of JavaScript.
  • Because Crockford takes time to explain JavaScript's unconventional features and distinguish between cases where the unconventional approach is "good" and cases where unconventional approach is "bad," readers of the book have a better opportunity to appreciate JavaScript's positives rather than mostly seeing its negatives.
  • JavaScript: The Good Parts reinforces the idea that trying to treat JavaScript like Java (or any other classically-object-oriented language) is a mistake. It explains why this approach often leads to frustration with JavaScript.
  • JavaScript: The Good Parts is a highly-readable and generally approachable book. The (English) language of the book is clear and well-written. The conciseness is impressive, especially considering that some of the book's most important points are made multiple times in different contexts and the entire book has fewer than 150 main pages.
    • Although JavaScript: The Good Parts is written in a very readable form, some portions of it are more difficult to read because the content is more difficult. This is particularly true in some of the sections in Chapter 4 on functions. Some of these sections required multiple readings for me, but they are also the sections that bring the most insight when understood.
    • Part of the reason JavaScript: The Good Parts can be so concise is that it provides very little introductory material. Those who have never coded before or have no JavaScript coding experience will likely be better off reading a more introductory book or online resource first.
  • Several useful JavaScript code snippets are provided in JavaScript: The Good Parts to illustrate good and bad parts of JavaScript. Along the way, several pieces of code are provided that are generic and reusable and worth highlighting here:
    • Chapter 3 (page 22) provides 6 lines of code for associating a "create method" with the Object function that "creates a new object that uses an old object for its prototype."
    • Chapter 4 (pages 32-33) provides 4 lines of code for making a named method available to all functions. A slightly revised version of this is presented one page later with the addition of a "defensive technique" to ensure that a newly defined method does not override one that another library is already using.
    • Chapter 4 (page 33) provides 3 lines of code for adding an integer to Number.prototype that "extracts just the integer part of a number."
    • Chapter 4 (page 33) provides 3 lines of code for adding trim method to String.prototype that "removes spaces from the ends of a string."
    • Chapter 4 (page 44) provides 8 lines of code for adding a curry method to Function.
    • Chapter 4 (page 45) provides 11 lines of code that implement a generalized function for generation of memoized functions.
    • Chapter 5 (page 54) provides 7 lines of code that implement a superior method that "takes a method name and returns a function that invokes that method."
    • Chapter 6 (page 61) provides two brief implementations of is_array functions for determining if a given JavaScript item is an array.
    • Chapter 6 (page 63) provides an implementation of a dim method on arrays that initializes all elements of an array.
    • Chapter 6 (pages 63-64) provides an implementation of a matrix method on Array that initializes all elements of arrays nested within an array.
    • Appendix E (pages 140-145) provides an implementation of a "simple, recursive descent [JSON] parser" to generate a JavaScript data structure from JSON text.
  • A book such as JavaScript: The Good Parts is necessarily opinionated (same applies to the excellent Effective Java). I like it in this case because it's not one-sided, rose-colored glasses opinions, but rather expresses opinions of both JavaScript's good and bad parts. Not all opinions are created equal. In this case, author Douglas Crockford brings great credibility to back his opinions. His involvement with JSLint and JSON alone speak volumes for his experience with and knowledge of JavaScript. Opinionated books written by inexperienced individuals are not likely to be very valuable, but an opinionated book by an experienced developer is often among the most valuable of technical books.

Conclusion

JavaScript: The Good Parts is one of those relatively rare technical books that is very hyped and lives up to that hype. It helps the reader to understand how to use the best parts of JavaScript and avoid or reduce exposure to the bad parts of JavaScript. In the process of doing this, it does help the reader to do exactly what the author is trying to accomplish: to think in JavaScript. JavaScript: The Good Parts condenses significant low-level details and important high-level language design discussion into fewer than 150 pages.

Friday, August 28, 2015

Book Review: JavaScript at Scale

JavaScript at Scale (Packt Publishing, July 2015) is written by Adam Boduch and has the subtitle "Build enduring JavaScript applications with scaling insights from the front-line of JavaScript development." JavaScript at Scale features ten chapters spanning approximately 240 substantive pages.

Preface

JavaScript at Scale's Preface includes a sentence or two describing each of its ten chapters. Readers of the book who want to work with the examples are encouraged to have Node.js, a code editor or IDE, and a "modern web browser" available while using this book.

The Preface states that the book is intended for "a senior JavaScript developer who is curious about architectural issues in the frontend." The Preface adds that "no prerequisite framework knowledge [is] required," but states that "strong JavaScript language skills are required." The Preface also points out more details on the nature of this book: "the concepts presented throughout the book are adaptations of components found in frameworks such as Backbone, Angular, or Ember."

These statements are important for developers considering purchasing and/or reading JavaScript at Scale as they provide an appropriate glimpse at what the author is aiming for with this book. The emphasis of JavaScript at Scale is definitely on higher-level design and architectural principles than on low-level implementation details (including use of specific frameworks and libraries in the category of low-level implementation details).

Chapter 1: Scale from a JavaScript Perspective

The initial chapter of JavaScript at Scale describes common scalability concerns and how they pertain to JavaScript. The chapter uses a hypothetical new generic JavaScript application as an example and talks about different scaling concerns that arise as more users use the application, as more features are added to the application, and as more developers are added to the development team.

Chapter 1 discusses architecture of components and communication between components for a browser-hosted application. The chapter discusses responsiveness, addressability, and configurability. There are several other discussion items in this chapter such as trade-offs that can be made in designing a JavaScript application's architecture, using frameworks and libraries, and taking advantage of communities.

Chapter 2: Influencers of Scale

The second chapter of JavaScript at Scale delves into a concept introduced in the first chapter: the influencers of scale. In looking specifically at scaling users, the chapter discusses business models for attracting customers, license fees, subscription fees, consumption fees, including advertisements in the product, using and being open source, communication between users, support mechanisms, allowing feedback and notifying users, and collecting user metrics.

Chapter 2 provides more discussion on subjects such as collecting metrics to determine which features to add to the application, dealing with too few to too many developers for the application development, and managing users.

JavaScript at Scale's second chapter concludes with presentation of scale influencer checklists, each consisting of several questions. These checklists ask questions related to the topics discussed in the chapter and provide a way for development teams to start conversations about these potential influencers of scale.

Chapter 3: Component Composition

Chapter 3 of JavaScript at Scale is the first chapter that gets detailed enough to include code listings. The author states, "Large-scale JavaScript code is still a relatively new thing." The chapter explains that JavaScript doesn't have built-in module support and so popular approaches for dealing with JavaScript modules include RequireJS and Browserify (CommonJS). There is extensive discussion regarding routing logic as well as discussion on controllers and views, models and collections, templates, and extending generic components in creating application-specific components.
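
As a hedged sketch of the module approaches mentioned (not code from the book), a CommonJS-style module of the kind Browserify bundles might look like this; the UserList component and file name are hypothetical:

// components/user-list.js
var EventEmitter = require('events').EventEmitter;

function UserList(users) {
    this.users = users || [];
    this.events = new EventEmitter();       // other components subscribe to these events
}

UserList.prototype.add = function (user) {
    this.users.push(user);
    this.events.emit('added', user);
};

module.exports = UserList;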

The third chapter also looks at how to map features to components and how to most appropriately deviate from desired generic component features when creating application-specific features. The chapter includes discussion regarding maintaining, debugging, and refactoring complex components. Other discussion covers organizing component code, implementing stateless functions, and extending versus configuring components.

Chapter 4: Component Composition and Responsibilities

The fourth chapter of JavaScript at Scale "focuses on the glue in between our JavaScript components." The chapter begins by looking at the two predominant mechanisms for communication between JavaScript components: message passing and event triggering. The chapter describes messaging considerations such as data size, naming conventions, and common data.

A particularly useful section of Chapter 4 describes implementing "traceable component communication" and describes three approaches for doing this. The chapter covers other considerations and approaches for communication overhead, areas of communication responsibility, loosely coupled communication, handling unexpected events, and component layers.

Chapter 5: Addressability and Navigation

JavaScript at Scale's fifth chapter opens by stating that URI processing "has shifted mostly to the frontend" in "large-scale JavaScript applications," but points out that the "benefits of frontend routing do come at a cost." The chapter then introduces "two approaches to routing in JavaScript": hash URIs and browser history API.
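
A minimal sketch of those two approaches (my own illustration, not the book's code):

// Hash URIs: react to changes after the # in the address bar.
window.addEventListener('hashchange', function () {
    var route = location.hash.slice(1);          // "#/users/42" -> "/users/42"
    console.log('hash route:', route);
});

// Browser history API: change the visible URI without a full page load.
function navigate(path) {
    history.pushState({}, '', path);
    console.log('history route:', path);
}
window.addEventListener('popstate', function () {
    console.log('back/forward to:', location.pathname);
});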

The fifth chapter spends some time looking at JavaScript routers, describing what they are, describing router responsibilities, describing router events, examining what should be in and not be in URIs, and describing manually and automatically associating URIs with resources. The router-heavy chapter continues with coverage of topics such as triggering routes, configuring routes, deactivating routes, troubleshooting routes, logging routes, and handling invalid resource states. Although most of this fifth chapter is discussion text like the earlier chapters of the book, there are a couple lengthy code listings in this chapter.

Chapter 6: User Preferences and Defaults

Chapter 6 of JavaScript at Scale opens with discussion of "three main types of preferences we're concerned with when designing large-scale JavaScript architectures": locales, behavior, and appearance. The chapter provides a brief overview of each of these three types of preferences before revisiting each of the three in significantly greater detail. The sixth chapter closes with a section on performance considerations related to preferences that includes a lengthy code listing.

Chapter 7: Load Time and Responsiveness

The seventh chapter of JavaScript at Scale points out that "Performance is the prominent indicator of quality in the eyes of a user." The chapter looks at JavaScript scale and performance in terms of load time and responsiveness. The chapter discusses use of different sizes of components to deal with network request overhead and developer overhead. The chapter also provides discussion on lazy loading modules and on advantages of using ECMAScript 6 modules over third-party module frameworks.

Chapter 7 also discusses scaling related to communication between components. There is discussion and a code listing explaining how the use of event brokers and indirection can be useful at first but then become a communication bottleneck in larger JavaScript applications. The chapter briefly describes a small number of JavaScript profiling approaches (such as using the browsers' console.profile() and console.profileEnd() and benchmark.js) that can be used to determine communication bottlenecks to be addressed.
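
A hedged example of those profiling calls (dispatchMessage is a hypothetical stand-in for whatever component-communication code is being measured):

function dispatchMessage(message) {
    return JSON.stringify(message).length;       // placeholder work
}

console.profile('component messaging');          // supported by the major browsers' developer tools
for (var i = 0; i < 10000; i += 1) {
    dispatchMessage({type: 'ping', id: i});
}
console.profileEnd('component messaging');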

Chapter 7's discussion on addressing issues with state includes an interesting discussion on use of purely functional code with no side effects to improve performance by reducing mutable state. There is also discussion on other topics such as improving performance related to DOM updates and increasing amounts of data to render.
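
A small sketch of that side-effect-free style (illustrative, not from the book): return new data rather than mutating shared state.

function markAllRead(notifications) {
    return notifications.map(function (n) {
        return {id: n.id, text: n.text, read: true};   // new objects; the input array is untouched
    });
}

var updated = markAllRead([{id: 1, text: 'hi', read: false}]);
console.log(updated[0].read);                          // true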

Chapter 8: Portability and Testing

Chapter 8 of JavaScript at Scale begins with a discussion of advantages of loosening the coupling between front-end JavaScript and back-end JavaScript components. This discussion also outlines the advantages and costs associated with mocking data. The discussion looks at the traits desired in a mocking layer and looks at two approaches to mocking the backend. This mocking discussion is fairly extensive.

The eighth chapter also covers unit testing and discusses how unit test tools built into popular JavaScript frameworks relate to framework-agnostic testing tools. The chapter then discusses advantages of "standalone unit testing tools" and emphasizes Jasmine. The chapter concludes with discussion on toolchains, integration testing, and end-to-end testing.

Chapter 9: Scaling Down

The ninth chapter of JavaScript at Scale discusses "scaling down from bloated design" and states, "The composition of our application as a whole determines how easy or how difficult it'll be to scale down by turning features off." The section on "JavaScript artifact size" states, "The biggest contributor to growing JavaScript artifact sizes are the new features we constantly add to our product." The section on "Network bandwidth" states, "The challenge is that any new feature is going to add new network bandwidth consumption." There is also discussion in this chapter on topics such as memory usage and garbage collection, CPU consumption, and backend scalability.

Chapter 9's discussion regarding scaling down also provides discussion on irrelevant features and overlapping features that often exist in older and larger JavaScript baselines. The chapter also discusses how customer demand and design failures relate to the need to scale down. The section on "unnecessary components" makes an interesting assertion: "The most common pattern flaw is unnecessary indirection." This section talks about issues that can arise when originally selected patterns no longer apply well and when the generic nature of frameworks' patterns do not fit well with the architecture.

"Inefficient data processing" is also described in this ninth chapter and the author describes how moving and transforming data between multiple components can lead to inefficiencies. The chapter finishes with discussions regarding "excessively creative markup" and "application composition." Chapter 9 is a discussion-rich chapter, but it also provides several code listings demonstrating how the conditions described in the chapter can occur that need to be scaled down.

Chapter 10: Coping with Failure

The final chapter of JavaScript at Scale provides the assertion, "As we scale our application, the approaches of how we deal with failures need to scale too." The chapter compares and contrasts failing fast and attempting to recover. It also covers the importance of quality constraints, meaningful feedback, fault tolerance, and disabling or even removing faulty components.

Chapter 10's coverage of fault recovery includes discussion regarding retrying failed operations, restarting components, requiring manual user intervention, and what to do when the application cannot be recovered. The chapter also looks at how addition of exception handling, state checking, intra-component notification, logging, and debugging can help deal with errors but also can lead to more scaling issues to consider. Like Chapter 9, Chapter 10 includes several code listings to illustrate its concepts.

The final paragraph of Chapter 10 also serves as the final summary paragraph for JavaScript at Scale. This paragraph articulates well in two sentences what the author hopes readers of this book get from the book: "To get the right answers, we first need to ask the right questions. I hope this book has equipped you with the requisite knowledge to formulate questions around scaling your JavaScript application."

General Observations

  • JavaScript at Scale emphasizes architectural and high-level design discussion related to scaling JavaScript applications rather than focusing on low-level implementation details (code, frameworks, libraries) that can be used to make JavaScript applications more scalable.
    • See my review of The Preface above for some of the author's comments that advertise this architectural emphasis in JavaScript at Scale.
    • See bullet below with details regarding author's blog post in which he further describes the architectural emphasis of JavaScript at Scale.
    • A prospective reader looking for a detailed account of exactly how to implement a one-size-fits-all scalable architecture will not find that elusive target here. A reader looking for general architectural principles to be considered when designing a JavaScript application so that it can scale in multiple directions and in response to multiple scalability "influencers" will be happier with this book.
    • See the last paragraph of my review of Chapter 10 above for more information on what the author advertised as the intention of this book. I believe that this book's strength is what he suggests: it can help JavaScript architects and developers come up with the questions they need to ask themselves to understand how to best implement and refactor a JavaScript application to be scalable.
  • The author of JavaScript at Scale has written a blog post that describes this book. The post articulates the focus of the book and includes this statement, "Architectural considerations get lost in all this choice [of JavaScript libraries and frameworks]." He adds that the information presented in JavaScript at Scale "is important information to have at our disposal when making architectural decisions, or when selecting a piece of technology to use in our application. With it, we can make informed choices about the scalability of our code, because code for one application is going to have different scaling characteristics than another application."
  • Although much of JavaScript at Scale is more discussion-oriented than code-oriented, there are code listings in some of the chapters. These listings feature black text on a white background even in the electronic edition (PDF) that I reviewed. There are no line numbers and no color syntax highlighting in the code listings. Most of the listings are small, but the longer ones can be more difficult to parse without color-coded syntax highlighting.
  • JavaScript at Scale is discussion-heavy and most of the writing is easy to follow. There are some typos (such as the use of "manor" when "manner" was intended), but these generally did not detract from the readability of the text.
  • The tone of JavaScript at Scale is generally like having a leisurely chat with a fellow architect about general JavaScript architectural questions and issues. At times, this was more verbose than I needed and I had to be careful not to skim too much because some really useful insights could be buried in the verbose prose. At other times, in areas in which I'm less familiar, the verbosity of text was helpful.

Conclusion

It is a common phrase in software development that there are no silver bullets and JavaScript at Scale doesn't try to sell that there is one when it comes to building scalable JavaScript applications. Instead, JavaScript at Scale identifies a myriad of issues for JavaScript architects and designers to consider in implementing and maintaining large-scale JavaScript applications. Although the readers of JavaScript at Scale are likely to be introduced to (or reminded of) some ideas to consider when making their applications more scalable, the author points out that there is really no way to summarize the approaches that work for all applications in all contexts. Instead, the author emphasizes the considerations that should be made in terms of several aspects of JavaScript application development and the possible approaches to deal with these different considerations as best fits each situation. Readers of JavaScript at Scale should be prepared to tailor what they're reading to their own applications' needs.

Friday, August 21, 2015

Book Review: Getting Started with Hazelcast - Second Edition

I recently purchased an electronic version of the Second Edition of Getting Started with Hazelcast as part of the recent Packt Publishing Skill Up Promotion. The book, written by Mat Johns, has the subtitle, "Get acquainted with the highly scalable data grid, Hazelcast, and learn how to bring its powerful in-memory features into your application." The book consists of eleven chapters and an appendix spanning approximately 120 pages.

Preface

The Preface of Getting Started with Hazelcast (Second Edition) provides brief descriptions of a sentence or two each summarizing the contents of the book's eleven chapters and its appendix. The Preface states that Getting Started with Hazelcast is intended for "Java developers, software architects, or DevOps looking to enable scalable and agile data within their applications." It recommends that readers have access to a JDK (at least JDK 6 but preferably JDK 8), a Java IDE, and Maven.

Chapter 1: What is Hazelcast?

The relatively short initial chapter of the Second Edition of Getting Started with Hazelcast introduces Hazelcast after first explaining the need for it with multiple simple graphics. The chapter describes Hazelcast as more than a cache and as an "in-memory data grid that supports a number of distributed collections, processors, and features."

Chapter 2: Getting off the Ground

The second chapter of Getting Started with Hazelcast explains and illustrates downloading Hazelcast from hazelcast.org/download/. The specific version downloaded and used for purposes of the Second Edition of Getting Started with Hazelcast is Hazelcast 3.5 (hazelcast-3.5.jar).

Chapter 2 demonstrates using ConsoleApp to use the command line to interact with simple Java applications using Hazelcast and to demonstrate Hazelcast's support for automatically supporting multiple clustered nodes. Code listings and associated explanatory text in this chapter introduce HazelcastInstance, working with an instance of distributed Map provided by HazelcastInstance.getMap(String), working with a distributed instance of Set provided by HazelcastInstance.getSet(String), working with a concurrent, distributed List instance provided by HazelcastInstance.getList(String), working with a concurrent, blocking, and distributed queue provided by HazelcastInstance.getQueue(String), and working with a distributed multimap instance (many values associated with a single key) provided by HazelcastInstance.getMultiMap(String).

Getting Started with Hazelcast's second chapter discusses "using predefined indexes" to search Hazelcast maps in a method similar to search capabilities provided by databases. This section provides an example of using SqlPredicate and lists the "limited subset of SQL" supported by SqlPredicate. The section also introduces other implementations of the Predicate interface (PredicateBuilder and PagingPredicate).

Another section of the second chapter covers using the hazelcast.xml file to configure Hazelcast for map eviction and covers the parameters named in hazelcast.xml such as max-size, eviction-policy, and eviction-percentage.

Chapter 3: Going Concurrent

Chapter 3 of Getting Started with Hazelcast begins with more description on Hazelcast's IMap and IQueue, how they extend ConcurrentMap and BlockingQueue respectively, and the implications of those extensions. The chapter also introduces Hazelcast's distributed locking, its "cluster-wide unique identifier generator" (IdGenerator) and its JMS-inspired broadcast messaging system.

Chapter 4: Divide and Conquer

Getting Started with Hazelcast's fourth chapter begins with the assertion that Hazelcast's "distributed nature" of its "data persistence" enables applications to "achieve high levels of reliability, scalability, and performance." The chapter discusses partitions across clusters, partitions backing each other up, scaling up the cluster by adding more nodes, acquiring and using a ReplicatedMap to ensure data is available on every node, using partition groups, merge policies (PassThroughMergePolicy, PutIfAbsentMapMergePolicy, HigherHitsMapMergePolicy, and LatestUpdateMapMergePolicy), and Hazelcast 3.5 cluster quorums.

Chapter 5: Listening Out

The fifth chapter of Getting Started with Hazelcast discusses using Hazelcast's distributed events and listeners. The chapter introduces "collection listeners" (EntryListener, ItemListener, MessageListener, QuorumListener) and "cluster listeners" (DistributedObjectListener, MembershipListener, LifecycleListener, and MigrationListener). After listing and briefly introducing each of these listeners, the chapter delves more deeply into how they are configured (in XML and in Java code) and used.

Chapter 6: Spreading the Load

Chapter 6 of Getting Started with Hazelcast discusses how Hazelcast provides the "ability to share the computational power in the form of a distributed executor" and how this is useful for "applications that require a lot of computational and data processing power." The chapter includes examples of applying IExecutorService, PartitionAware, HazelcastInstanceAware, and EntryProcessor.

Chapter 7: Gathering Results

Getting Started with Hazelcast's seventh chapter is its "big data" chapter. This chapter opens with an explanation of why "big data has proven to be a bit of a trendy buzzword of late." Brief history and background of MapReduce is provided. I like the simple explanation and graphic used in this introductory discussion regarding MapReduce.

Chapter 7 moves from general MapReduce theory to discussion of Hazelcast's MapReduce implementation. It demonstrates via examples the use of Reducer and JobCompletableFuture and shows how to work with Hazelcast aggregations using both Java SE 6 syntax and the much more concise JDK 8 syntax.

Chapter 8: Typical Deployments

The eighth chapter of Getting Started with Hazelcast discusses determining the proper deployment strategy with Hazelcast for different types of applications. The chapter talks about "separat[ing] our application from the data cluster" and introduces Hazelcast's DataSerializable interface. The chapter concludes with coverage of "various architectural setups" that are described briefly, illustrated with simple graphics, and discussed in terms of trade-offs (advantages and disadvantages of each approach).

Chapter 9: From the Outside Looking In

Chapter 9 of Getting Started with Hazelcast covers two approaches for interacting with Hazelcast for non-Java/non-JVM clients. The chapter introduces the Hazelcast Memcache Client and provides simple demonstrations of using it with Python and PHP. The ninth chapter also introduces Hazelcast's support for REST APIs.

Chapter 10: Going Global

The tenth chapter of Getting Started with Hazelcast looks at using Hazelcast in conjunction with cloud computing environments. The main focus of the chapter is on two alternate "mechanisms of cluster discovery" to IP multicast: manually controlled unicast configuration and discovery via Amazon AWS management APIs. These configurations are shown by example with changes to the hazelcast.xml XML file. The chapter also discusses how to "configure Hazelcast to push entries from our local cluster to a remote cluster by defining a Wide Area Network (WAN) replication policy."

Chapter 11: Playing Well with Others

The final chapter of Getting Started with Hazelcast discusses how "Hazelcast provides ... support to use and extend a few popularly used libraries" and how Hazelcast "provides implementations of standardized interfaces." The chapter demonstrates using Spring with Hazelcast (hazelcast-spring-3.5.jar) for dependency injection, using Hibernate with Hazelcast (hazelcasthibernate3-3.5.jar/hazelcasthibernate4-3.5.jar) for caching of "legacy applications," using Hazelcast as an implementation of the standard JCache API, using Hazelcast's Cluster Management Center (mancenter-3.5.war), and enabling Hazelcast JMX support.

Appendix: Configuration Summary

The appendix provides a single location reference of the various Hazelcast configuration examples presented in the book's chapters.

General Observations

  • The author of Second Edition of Getting Started with Hazelcast, Mat Johns, is one of two currently highlighted Hazelcast Heroes. His "Hazelcast Cred" states, "Literally wrote the book on Hazelcast, entitled 'Getting Started with Hazelcast'."
  • The title "Getting Started with Hazelcast" is appropriate for this book. It delivers on providing a good starting point for installing, configuring, and beginning to use Hazelcast.
    • A reader of this book will be well equipped in terms of concepts and vernacular to reference Mastering Hazelcast, the highly approachable Hazelcast documentation, and blogs and forums on Hazelcast with the background and context provided by this book.
    • Other than the Hazelcast Manual and Mastering Hazelcast (both of which are provided by Hazelcast), the only book I have found devoted to Hazelcast is Getting Started with Hazelcast.
    • I appreciate that Getting Started with Hazelcast covers using Hazelcast in different architectural setups and provides a narrative that shows how different Hazelcast features or deployments can be used to address different needs. The book not only shows the semantics or mechanics of using Hazelcast, but it also discusses why one would choose to use Hazelcast in different ways in different situations.
  • It's typically a good sign regarding a technical book's reception when there is more than one edition of that book. A quick browse of book reviews of the first edition of Getting Started with Hazelcast shows that this book has been generally well received.
    • The first edition of Getting Started with Hazelcast provides instructions on downloading and uses Hazelcast 2.6 for its examples while the second edition references Hazelcast 3.5.
    • The first edition of Getting Started with Hazelcast was published in August 2013 and this second edition was published in July 2015.
    • The second edition is roughly 20 pages lengthier than the first edition.
    • The second edition of Getting Started with Hazelcast adds three new chapters on trendy topics related to Hazelcast: 7 ("Gathering Results" / Big Data), 10 ("Going Global" / Cloud), and 11 ("Playing Well with Others" / Spring/Hibernate/JCache/JMX).
  • Most of the graphics in Getting Started with Hazelcast are simple graphics with black font on white background, though there are a few graphics with color and a few color screen snapshots in the electronic version of the book.
  • Code listings are black text on white background with no syntax color highlighting and no line numbers.
    • This was a more noticeable disadvantage when I was moving between the Hazelcast Manual, Mastering Hazelcast, and Getting Started with Hazelcast at the same time and the former two electronic publications have color-coded syntax.
  • There were a few minor grammar errors and typos, but for the most part Getting Started with Hazelcast is written in a format that's easy to read and comprehend.
  • I was happy to see coverage of JCache (JSR 107) in the second edition of Getting Started with Hazelcast and to see brief mention of enabling Hazelcast's JMX support for monitoring.
  • I would have liked to see a bit more discussion in Getting Started with Hazelcast regarding the differences between Hazelcast Enterprise and Hazelcast.

Conclusion

Overall, I can recommend the second edition of Getting Started with Hazelcast to prospective readers. My only reservation regarding recommendation of this book has nothing to do with any fault of the book, but rather is a result of the excellent Hazelcast documentation that is already available. The Hazelcast Documentation and Mastering Hazelcast are both excellent resources with the former more introductory and the latter more intermediate. It is worth noting that downloading Mastering Hazelcast does require providing one's name and an e-mail address. I personally found that Getting Started with Hazelcast provided a nice overview and background about Hazelcast and why it's important that enabled me to quickly use these additional resources as well.

Tuesday, August 18, 2015

Setting PostgreSQL psql Variable Based Upon Query Result

When using PostgreSQL's psql command-line tool to interact with a PostgreSQL database via operator interaction or script, it is not uncommon to want to set psql variables based on the results of a particular query. While PostgreSQL's procedural language PL/pgSQL supports approaches such as SELECT INTO and assignment (:=) to set PL/pgSQL variables based on a query result, these approaches are not supported for psql variable assignment.

The typical way to make a psql variable assignment is via use of \set. This allows for the setting of the psql variable to a literal value. However, there are situations in which it is desirable to set the psql variable based upon the result of a query. This is done with the \gset option in psql. Unlike the \set operation in psql which sets a variable with an explicitly specified name to an explicitly specified literal value, \gset implicitly names the psql variables after the names of the columns (or aliases if columns are aliased) returned by the query to which the \gset is associated. The \gset is specified after the query (no semicolon generally on the query) and there is no semicolon after the \gset statement (just as no semicolon should be placed after a \set statement).

It is easier to see how \gset works with a code sample. The next code listing shows a small psql file that takes advantage of \gset to set a psql variable named "name" based on a query result; the variable's value is then displayed ("echoed") using psql's colon prefix notation.

CREATE TABLE person
(
   name text
);

INSERT INTO person (name) VALUES ('Dustin');

SELECT name FROM person \gset
\echo :name

DROP TABLE person;

In the previous code listing, lines 8-9 are the relevant lines for this discussion (the remainder of the lines are for setup and teardown of the demonstration). Line 8 contains the query (sans semicolon) followed by \gset. A psql variable named 'name' is set by that statement, as evidenced by the echoing of its value in line 9. The output showing this works looks like this in a psql terminal window:

CREATE TABLE
INSERT 0 1
Dustin
DROP TABLE

Additional Considerations When Using psql's \gset
  • Placement of a semicolon between the query and the \gset affects the output.
    • Placing a semicolon after the query and before the \gset will execute the query and display the query results before setting the variable(s).
    • Leaving the semicolon out will execute the query to populate psql variables named after the query's columns and aliases, but will not display the actual query results.
  • There should be no semicolon after the entire statement; placing a semicolon after the \gset results in an error.
    • Error: invalid command \gset;
  • The query being used to set variables via \gset should return exactly one row.
    • ERROR: more than one row returned for \gset
  • When a column in the SELECT clause of a query associated with \gset is aliased, there are actually two psql variable names by which the returned value can be accessed. They are the column name and the alias name.
    • This allows a psql developer to alias a predefined column to any name he or she prefers for the variable set by \gset.

Conclusion

When using psql, use \set variable_name variable_value to explicitly set a psql variable with the name provided by the first argument and an associated value provided by the second argument. To set a psql variable based on query results, append \gset after the query (without semicolon generally) and access returned values by column names (or by columns' aliased names).

Monday, August 17, 2015

Procedure-Like Functions in PostgreSQL PL/pgSQL

PostgreSQL does not support stored procedures in the sense that a database such as Oracle does, but it does support stored functions. In this post, I look at a few tactics that can make the use of a stored function in PostgreSQL (stored function and its calling code both written in PL/pgSQL) feel like using a stored procedure. These simple approaches allow developers to use PostgreSQL stored functions in a manner that is more consistent with use of stored procedures.

Stored procedures and stored functions are very similar and, in fact, I've often heard the term "stored procedure" used interchangeably for stored procedures and for stored functions. For purposes of this post, the essential differences between the two can be summarized as:

  1. Functions are created with the FUNCTION keyword and procedures are created with the PROCEDURE keyword.
  2. Stored procedures do not return a value, but stored functions return a single value.
    • The stored function's return value can be used in SELECT statements.

Because PostgreSQL PL/pgSQL only supports stored functions, the defined functions need to declare a return type. Fortunately, in the case of our emulated "stored procedure," we can declare void as the return type. This is demonstrated in the code listing below for a "Hello World" implementation written in PL/pgSQL.

CREATE OR REPLACE FUNCTION helloWorld(name text) RETURNS void AS $helloWorld$
DECLARE
BEGIN
    RAISE LOG 'Hello, %', name;
END;
$helloWorld$ LANGUAGE plpgsql;

With a PostgreSQL stored function with void return type written, we now can invoke it from a client. In this case, I will look at three approaches for calling the stored function from other PL/pgSQL code.

PL/pgSQL: Invoke Function Via SELECT Statement

One approach for calling the stored function in PL/pgSQL code is to use SELECT INTO. The most obvious disadvantage is that, in the case of a procedure-like function returning void, nothing useful is selected, so the variable being selected into must be ignored anyway. The next code listing demonstrates using SELECT INTO to invoke the procedure-like function. The variable in this example, called "dumped", will not have anything useful selected into it, but the statement successfully invokes the stored function. Besides the line shown here, I also need a line in the DECLARE section to declare the "dumped" variable.

SELECT INTO dumped helloWorld('Dustin');

PL/pgSQL: Invoke Function Via Variable Assignment

The PL/pgSQL assignment operator provides another way to invoke the procedure-like stored function. As with the previous example, this approach requires a variable ("ignored") be declared in the DECLARE section and then that variable is assigned the result of the function that returns void, making it effectively a throw-away variable as well.

ignored := helloWorld('Dustin');

PL/pgSQL: Use PERFORM to Explicitly Ignore Returned Value

The PL/pgSQL command PERFORM provides some syntactical advantages when invoking procedure-like stored functions. This command does not require a PL/pgSQL variable to be declared. This saves the line to declare the variable and avoids the pretense of setting a variable that is never really set. It's really just a shortcut for SELECT, but it's syntactically sweeter and makes for more readable code because code maintainers don't have to figure out that a statement that appears to make an assignment actually does not do so.

PERFORM helloWorld('Dustin');

Conclusion

Although PostgreSQL only supports stored "functions" (and not stored "procedures"), it provides syntax that allows functions to take on procedure-like qualities. By allowing stored functions to return void and to be called from PL/pgSQL code via PERFORM, which expects no result value, PostgreSQL allows client code to appear as if it is invoking a stored procedure rather than a stored function.

Monday, August 10, 2015

JavaScript Survival Tools for the Java Developer

Although I've worked with JavaScript a bit here and there over the years, I have not used it as regularly or consistently as I've used Java. I have also not used it for anything nearly as complex as for what I've used Java. Therefore, it is not very surprising that moving from commonly used Java to seldom used JavaScript offers some pain points. This post looks at some tools and techniques Java developers (and developers familiar with other class-based and statically typed languages) might find helpful in transitioning to JavaScript development.

Be Aware / Beware the JavaScript Gotchas: Helpful Online Resources

Every programming language has its share of gotchas, sharp edges, traps, quirks, and corners, but it is difficult for me to think of a programming language (other than perhaps Perl) that has more of these in its fundamental design than JavaScript. The developer who is aware of these is likely still going to run into these occasionally, but awareness will help the developer reduce the number of times he or she falls into the traps.

Given the importance of understanding these JavaScript nuances, the first category of tools is online resources that provide warnings about these nuances and their effects. A good resource providing an overview of JavaScript Gotchas is Seven JavaScript Quirks I Wish I’d Known About. Other resources useful in getting familiar with nuances of JavaScript include Short Guide to JavaScript Gotchas, A Beginner's List of JavaScript Gotchas, and What are the top JavaScript pitfalls?

I have not read the book JavaScript: The Good Parts, but many Java developers who have written about their experiences with JavaScript have mentioned that this book has helped them to better understand how to properly use JavaScript. The recent blog post JavaScript Debugging Tips and Tricks is also very useful in learning about debugging JavaScript.

For me, the biggest (in terms of time spent) issue in moving from Java to JavaScript is the different handling of scope (and related issues such as hoisting and the different meanings of the this keyword that take 40 minutes to explain and differ depending on which mode is being used). JavaScript's scope rules are a bit complex (see this StackOverflow thread for a taste of how complex), but for me they boil down to essentially "use var often," "embrace functions for controlling scope," and "braces don't mean anything in specifying scope."
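
To make the scope issue concrete, here is a minimal sketch (the function and variable names are mine, not taken from any of the referenced resources) of function-level scoping and hoisting in action:

// Unlike Java, the braces of the if block do NOT create a new scope;
// 'var' declarations are scoped to the enclosing function and hoisted to its top.
function describeScope() {
    if (true) {
        var fromBlock = "declared inside a block";
    }
    console.log(fromBlock);    // prints the string; 'fromBlock' is visible throughout the function

    console.log(hoistedLater); // prints 'undefined': the declaration is hoisted, the assignment is not
    var hoistedLater = "assigned near the end of the function";
}

describeScope();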

See Where You're At with console.log

The ability to log an application's state and a description of an application's behavior is useful in programming and debugging in any programming language. Because JavaScript is a weakly and dynamically typed language, this ability to see what's happening at runtime is even more significant: there is no compiler to help find "obvious" typos and mismatched interfaces. In the old days, one was typically stuck with invoking alert() to see these types of logging details. That approach lacked subtlety and placed a popup on the screen that the user had to dismiss by clicking "OK". The advent of console.log() is a welcome addition to the JavaScript developer's tool set.

The biggest downside of console.log may be that it's not part of the DOM or JavaScript standards. However, modern major browsers all seem to support it. For those worried about a target environment browser not supporting it, a fall-back to alert can be used. The obvious advantage of console.log is that modern browsers support special windows or areas in the browser in which messages are logged rather than rendering a pop-up that requires user acknowledgement to close. For Java developers familiar with logging using Log4j or java.util.logging, console.log is much closer to what they're used to than alert.

It's also worth mentioning here that the non-standard console.dir can be very handy, with its advantages being more obvious in certain browsers than in others.
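
As a simple illustration (the message text and the object being logged are my own invention), a small wrapper can log via console.log when it is available and fall back to alert otherwise, while console.dir dumps an object's properties:

// Prefer console.log when the console API exists; fall back to alert otherwise.
function logMessage(message) {
    if (window.console && window.console.log) {
        console.log(message);
    } else {
        alert(message);   // last resort for environments without a console
    }
}

logMessage("Page initialized at " + new Date());

// console.dir displays an interactive listing of the object's properties in most browsers.
console.dir({ title: "JavaScript: The Good Parts", author: "Douglas Crockford" });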

See Where You've Come From with console.trace

Like console.log, console.trace is non-standard but supported by modern web browsers. What makes console.trace different from console.log is that console.trace logs the stack trace of the JavaScript execution at the time of the logging invocation. This is similar to logging a thread's current stack trace in Java.
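
A small sketch (the function names here are hypothetical) shows console.trace logging the call stack at the point it is invoked:

function inner() {
    console.trace("How did execution get here?");   // logs the message plus the current call stack
}

function outer() {
    inner();
}

outer();   // the logged trace shows inner called from outer called from the top level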

Feel at Home with Constructor Functions

Java developers are comfortable working with objects. Although JavaScript's object-orientation is prototype-based rather than class-based, the use of constructor functions makes JavaScript objects feel more like Java objects. I have discussed the use of constructor functions in greater depth in the post JavaScript Objects from a Java Developer Perspective. The book Object-Oriented JavaScript provides a thorough introduction to using JavaScript objects and I have reviewed this book in an earlier post.
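
The following minimal sketch (the Person type is a made-up example, not taken from either of the referenced posts) shows the basic constructor function pattern:

// Constructor functions are capitalized by convention and invoked with 'new'.
function Person(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
}

// Methods shared by all instances belong on the prototype.
Person.prototype.fullName = function () {
    return this.firstName + " " + this.lastName;
};

var author = new Person("Douglas", "Crockford");
console.log(author.fullName());   // Douglas Crockford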

Override JavaScript Objects' toString() Implementations

JavaScript, like Java, provides an implementation of toString() at its highest Object level. All JavaScript objects automatically inherit that toString() implementation via Object.prototype.toString(). Like Java's, the JavaScript Object.toString() provides only minor informative value. The same advantages that come from overriding Java classes' toString() implementations apply when overriding JavaScript objects' toString() implementations. Overridden and customized JavaScript objects' toString() methods are especially helpful when used in conjunction with the just-discussed console.log.

I have written about how to override JavaScript objects' toString() methods in the posts JavaScript Objects from a Java Developer Perspective and A Java Developer's Perspective on the Power and Danger of JavaScript's Object Prototype.
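
As a brief, self-contained sketch (the Book type is a hypothetical example), overriding toString() makes logging output far more readable:

function Book(title, author) {
    this.title = title;
    this.author = author;
}

// Without this override, logging a Book instance typically yields "[object Object]".
Book.prototype.toString = function () {
    return this.title + " by " + this.author;
};

var goodParts = new Book("JavaScript: The Good Parts", "Douglas Crockford");
console.log("Currently reading: " + goodParts);   // string concatenation invokes toString()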

Enjoy Java's Write-Once, Run Anywhere Paradigm in JavaScript

Although modern web browsers implement JavaScript features more consistently these days than they used to, there are seemingly countless frameworks available that abstract specifics of browsers' DOMs and other implementation details away from the developer. Arguably the most famous and most used of the JavaScript frameworks is jQuery. Most of the Java developers I know who speak positively of JavaScript are developers who use a framework such as jQuery to abstract away the need to worry about DOM and other browser implementation nuances and idiosyncrasies. As I have revisited JavaScript development off and on over the years, I've been impressed with how much the frameworks have improved compared to the relative scarcity and lower quality of frameworks available several years ago. jQuery was one of the first to really make a difference in JavaScript development and several frameworks introduced since then have had similarly significant impacts. Java developers understand well the value of being able to write the same code once and have it work on multiple targeted environments.
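
As a tiny sketch of that value (this assumes the jQuery library has been loaded and that the page contains an element with id "status"), the same few lines work across supported browsers without any browser-specific DOM code:

// jQuery normalizes DOM access and event handling across browsers.
$(document).ready(function () {
    $("#status").text("Loaded at " + new Date());
});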

Another framework that has been popular of late, including among Java developers, is AngularJS.

IDE Familiarity Might Reduce Homesickness

A good IDE can be a valuable tool for anyone wanting to learn the basics of a new programming language. The many "holy wars" over which IDEs and text editors are best provide evidence that many developers prefer the IDEs and text editors they are used to using. Fortunately for Java developers, the dominant Java IDEs (NetBeans, IntelliJ IDEA, Eclipse) all provide JavaScript development support.

Play with JavaScript Using Java's Implementation

The Oracle HotSpot JVM provides a JavaScript implementation: Rhino before Java 8 and Nashorn since Java 8.
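
For example, a short script like the following (the file name and its contents are my own illustration) can be run with the jjs command-line tool that ships with JDK 8, making it easy to experiment with JavaScript, and even with Java types from JavaScript, without involving a browser:

// hello.js  --  run with:  jjs hello.js
var greeting = "Hello from Nashorn";
print(greeting);                                  // print() is provided by the jjs shell

// Nashorn's Java.type() exposes JDK classes to the script.
var ArrayList = Java.type("java.util.ArrayList");
var names = new ArrayList();
names.add("Dustin");
print(names);                                     // [Dustin]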

Play with JavaScript with jsFiddle

If you read many StackOverflow answers tagged as JavaScript, you're likely to see people posting there mentioning that they had tested suggested implementations on jsFiddle. This handy online tool allows developers to "try out" JavaScript and see how it behaves without needing to set up a page or local environment.

Check JavaScript with JSLint

For Java developers used to the Java compiler catching all types of minor syntactic issues, JavaScript can sometimes seem too forgiving, quietly failing to do what the developer expects because of some bad syntax. The online JSLint (The JavaScript Code Quality Tool) is very helpful in identifying JavaScript syntax issues. jslint4java is a Java wrapper for JSLint.
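
As a small, contrived sketch of the sort of slip JSLint catches, the function below "works" when run but creates an implied global and omits semicolons, both of which JSLint flags:

// JSLint complains about the implied global 'total' (no var) and the missing semicolons.
function addTax(price) {
    total = price * 1.05
    return total
}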

Debugging is Even More Helpful in JavaScript

The ability to debug code is important in any programming language, but it feels even more important in dynamically typed languages such as JavaScript, where more errors are pushed to runtime instead of being encountered earlier at compile time. There are several tools available for helping debug JavaScript applications. These include tools mentioned here such as jsFiddle, but also include browser-specific tools such as the famous and frequently used Firebug and the browsers' own debugger implementations (Firefox Debugger, Internet Explorer F12 debugger, and Chrome's JavaScript Debugger).
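
One language-level hook worth knowing in this context is the debugger statement, which pauses execution whenever one of those debuggers is attached (the function below is a made-up example):

function computeTotal(prices) {
    "use strict";
    var total = 0, i;
    debugger;   // execution pauses here when browser developer tools are open
    for (i = 0; i < prices.length; i += 1) {
        total += prices[i];
    }
    return total;
}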

Use a Java-like Abstraction of JavaScript

For the developer who prefers static typing and some of the other advantages of Java and other languages, an approach to more comfortably "writing JavaScript" is to use a superset or abstraction of JavaScript with static characteristics. The idea of compiling from a more statically typed language into JavaScript is not new to Java developers as Google Web Toolkit (GWT) has done this for a long time. A newer implementation of this concept is provided as DukeScript, which is described as a "new technology for creating cross-platform mobile, desktop and web applications" that "are plain Java applications that internally use HTML5 technologies and JavaScript for rendering."

Another option for those wishing to generate JavaScript while using constructs they are familiar with in Java is use of TypeScript, "a typed superset of JavaScript that compiles to plain JavaScript." TypeScript provides stronger typing, classes, interfaces, and other concepts that Java and C# developers are familiar and comfortable with. Many of us who use Java, C++, C#, and languages such as these have learned to love the advantages of strongly typed programming languages and TypeScript presents many of those advantages to us in JavaScript.

Conclusion

JavaScript has become a widely popular and ubiquitous programming language. In several ways, it is quite different from other popular programming languages such as Java and C#. Although experience in any one programming language is generally beneficial in learning a different programming language, there have actually been times that my "thinking in Java" has almost been a detriment when writing JavaScript due to some of the subtle (from a syntax perspective) but drastic (from a behavior perspective) differences in the two languages. This post has outlined some of the tools that I've found helpful for being more effective when developing JavaScript from a primarily Java background.