A Quick Intro to Yeoman.io

When I built my first HTML5 game, Super Space Odyssey, I did things the hard way.  I created everything from scratch and did not leverage a game engine to do any of the heavy lifting.  For my next game I looked around on the web, found several options, and decided to go with Crafty.js.

As it turns out, I get distracted easily…  I found that, just like with every framework, setting up dependencies and the structure of the application is tedious and time consuming.  While searching the Crafty site, I found the developers recommend a particular structure for games, and they also supply a handy boilerplate to get things rolling (CraftyBoilerplate).

I have wanted to create my own Yeoman generator for a while, and now I have a reason.  Yeoman.io is a self-declared “workflow; a collection of tools and best practices working in harmony to make developing for the web even better.”  This sounded pretty Totes McGoats, but I didn’t really know how awesome it could get.  With a little terminal kung fu, I was able to set up a new game faster and a lot easier (first world problems, I know).

Before I can get started, I need to understand the essence de Yeoman.  There are three main pieces to Yeoman: Yo, Grunt, and Bower.  Yo is the templating engine I will use to push out the scaffolding for Crafty games.  Grunt is the JavaScript-based task runner I use for minifying, building, and deploying my game to a local Node server.  Bower does all the dependency management so we don’t have to!

To start things off I used the Node Package Manager (npm) to install Yeoman, which installs Grunt and Bower as well.  Both Node and npm are required for yo, Grunt, and Bower.

npm install -g yo

This installs yo (Yeoman) globally so it is accessible through the terminal.  Once yo is installed we can start creating a generator.  Generators are what we want to create; they are the magic that makes starting new projects easier.  Specifically, a generator is what will scaffold our application, set up our local dev environment, and allow us to use Bower for dependency management when building our games.  Once the game is created, the scaffolding will set up the Gruntfile.js, package.json, and bower.json files.

Generator: This is the definition of the scaffolding.  The generator will be used to start a new project.  It is basically a command line installer: it asks you various questions about your project and pushes out the boilerplate you need so you can just get things done right (GTDR).

Gruntfile.js: This file is created by the generator and used by Grunt to build, deploy, set up a webserver, minify, clean the project, etc.  We will use Grunt during the development phase to help us test our site on a local Node server.

Package.json: This file holds metadata about the project and is used by npm when publishing.  It isn’t needed for much more than that.  This too will be created by the generator, and the generator will have one of its own to describe itself as well.

Bower.json: Here is where we describe the dependencies of our project.  We can use Bower to pull and update our project’s dependencies.  No more navigating to sites, downloading JS files, unpacking, and inserting the scripts.  Nope, only magic here.
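To make that concrete, a minimal bower.json could look something like the sketch below.  The project name, version, and the exact Crafty package name/version are hypothetical; check the Bower registry for the real package before copying this.

```json
{
  "name": "my-crafty-game",
  "version": "0.0.1",
  "dependencies": {
    "crafty": "~0.6.2"
  }
}
```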

Ok, so now we need to get started with our generator.  I don’t know about you, but I don’t want to start from scratch.  After all, we are going through this in the first place because we don’t want the tedious work.  So my first step is to download yo’s generator-generator.  This bad boy is all the scaffolding you need to create yo generators.

npm install -g generator-generator

Once the generator for scaffolding generators is installed, we need to create a new directory to contain our generator code and run yo generator from inside it.

mkdir /Users/shawnmeyer/Documents/Projects/Generators/generator-crafty
cd /Users/shawnmeyer/Documents/Projects/Generators/generator-crafty
yo generator

This will spit out various questions.  Fill them in accordingly to have the generator set up your local project, but be sure to name your directory generator-blank, where blank is the name of your generator.  This naming convention is important: the second part is how we will tell yo to scaffold using your generator.  In my scenario it would be:

yo crafty

In your new generator project you will have a package.json file.  This file contains the metadata and dependencies of your generator.  When a user installs your generator using the Node Package Manager, npm will download and install all of your generator’s dependencies as well.

I am not going to show you all the steps I took to create the generator-crafty package, but you can check it out at https://github.com/sgmeyer/generator-crafty.  I do have some handy tips, though.  When you are creating your generator you can test it locally a couple of ways.

First, you can run unit tests by entering the command below in the root directory of your generator.  This command looks at your test/*.js files and runs those tests in the terminal.

npm test

Next, you can link the generator to npm and test that the generator creates the scaffolding.  This is handy if you don’t want to publish your generator and you just want to hoard and keep all your fancy generators to yourself.  Run this command in the root directory of your generator.

npm link

And then you can use yo to run your generator.  To run your generator and set up the scaffolding, navigate to your project directory.  Since I used generator-crafty as my name, I will go to the crafty directory.  The command below tells yo to run generator-crafty, and the generator will set up all the scaffolding we defined in the index.js file of our generator.  I didn’t show this step, but you can check out my code to see an example (https://github.com/sgmeyer/generator-crafty).

yo crafty

Assuming you added some tasks to your Gruntfile.js, you can also use the grunt command in this directory to execute them.  In my code I set up a custom Grunt task that creates a Node server, opens a browser, and runs live reload to detect updates to the JS and HTML and give you a live preview.

grunt server

Grunt also lets you set up a default task, which it pulls from the Gruntfile.js, or you can call tasks by name.  In my example I am calling the server task by name; however, I have it set up to work by calling grunt without a task name too.
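A Gruntfile wired up that way might look roughly like the sketch below.  The plugin choices (grunt-contrib-connect and grunt-contrib-watch) and the file globs are my assumptions for illustration, not necessarily what generator-crafty actually uses.

```javascript
// Sketch of a Gruntfile.js with a named "server" task and a default alias.
module.exports = function (grunt) {
  grunt.initConfig({
    // Serve the app locally and open a browser (assumed plugin: grunt-contrib-connect)
    connect: {
      server: {
        options: { port: 9000, open: true, livereload: true }
      }
    },
    // Watch JS/HTML and trigger live reload on changes (assumed plugin: grunt-contrib-watch)
    watch: {
      files: ['src/**/*.js', '*.html'],
      options: { livereload: true }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-connect');
  grunt.loadNpmTasks('grunt-contrib-watch');

  // "grunt server" runs the named task; plain "grunt" falls back to the default alias.
  grunt.registerTask('server', ['connect', 'watch']);
  grunt.registerTask('default', ['server']);
};
```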

Once the scaffolding is done running you might want to add some more magic to your project, and this is where Bower comes in.  Let’s say we want to add a dependency on Underscore.  Bower will let us do this.

bower install --save underscore

This will install the Underscore dependency in the bower_components directory of your new project (in my crafty directory), and it will update the bower.json file’s dependency list.  If you don’t want to update your bower.json file you can drop the --save flag.  Again, Bower is just a package manager.  It does not replace RequireJS or other front-end dependency loaders.  Bower simply installs and updates your local dependencies.

When new developers pull your project from source control, they can run the command below to install the project’s dependencies.  This is convenient, and you won’t have to commit your JS dependencies to source control.

bower install

This is just a quick glimpse at the Yeoman workflow.  There is a ton of power available to us by using Yo, Grunt, and Bower.  These tools really can work together and help us be more productive.

Sharing Forms Authentication Tickets Between Applications

Recently, I ran across a fun challenge. I have an application modularized across multiple websites, and I need to allow users to navigate between these applications seamlessly, with a single sign-on mechanism and minimal HTTP redirects. What I have traditionally done is link the applications using some form of SSO (e.g. encrypted query strings or SAML). My personal opinion is that those solutions are ideal for integration between two separate entities, such as websites owned by two different organizations. After thinking about this problem I wanted to eliminate redirecting the user for SSO purposes. Consider my typical scenario.

As a user I click a link on my intranet to log into the application. The initial URL sends me to a website that attempts to pull my credentials and builds the SSO request. Once the SSO request is built, the user is returned an HTTP response directing them to POST or GET a new URL with the SSO request. The service provider then verifies the SSO request and redirects the user to their landing page. That is a lot of work to get into the application, but for communication between entities it is pretty standard.

That flow is fine when the user originally authenticates from the Identity Provider. My opinion, though, is that once a user has authenticated into your website, they should not see authentication hops between the multiple websites owned by the service provider, and that is exactly what multiple redirects expose. Instead, I want the applications to share authentication: when the user goes from one application to the next, they are sent directly to the URL rather than to an intermediary sign-on handler that eventually redirects them to the requested page. One application handles authentication, and users move between the rest without re-authenticating on each website. After doing some digging I found that ASP.NET provides an easy solution: sharing the forms authentication ticket.

The Authentication Ticket:

The basic purpose of the authentication ticket is to tell your web application who you are and that you are authenticated. When using forms authentication, each authenticated user has a forms authentication ticket. This ticket is encrypted, signed, and then stored in a cookie (I am ignoring cookie-less configurations). It is also important to know that ASP.NET uses the machine key to secure your authentication cookie. According to MSDN, .NET 1.0 and 1.1 handle this differently than .NET 2.0 and later. For the purposes of this article we are going to assume .NET 4.5, but I will mention a configuration for .NET 2.0 SP1. For more information on authentication tickets check out the MSDN article http://support.microsoft.com/kb/910441.

Configuring Forms Authentication

  <system.web>
    <machineKey validationKey="validationKeyA" decryptionKey="decryptionKeyA" validation="SHA1" decryption="AES" />
    <authentication mode="Forms">
      <forms name=".ASPXAUTH" domain="shawnmeyer.com" />
    </authentication>
  </system.web>

This configuration needs to go in every application’s Web.config file. The machineKey element is what allows the applications to read the authentication ticket across sites. If one of your applications is a .NET 2.0 application that was upgraded to .NET 4.5 and it still uses the legacy mode, you might need to add the compatibilityMode="Framework20SP1" attribute to the machineKey element in all applications. If not, you can ignore this attribute.

The authentication element is where we configure forms authentication. The child forms element sets the properties of forms authentication. Each application must use the same name attribute, as this is what the cookie will be named. If your applications live in the same subdomain you can ignore the domain attribute; however, if they live across multiple subdomains of the same domain, you will need to set the domain without a subdomain.


A couple of troubleshooting tips:

  1. If your browser is not sending the cookie, the domain attribute probably needs to be set. When reading the cookie, the domain should read '.shawnmeyer.com' and the Web.config attribute should read domain="shawnmeyer.com".
  2. If your browser is sending the cookie, but the cookie does not show up in the server's HttpCookieCollection, the cookie is probably not passing validation. Ensure the machineKey elements match across each application. If they do match, ensure they are using the same compatibility mode.

Creating the Cookie

In the application that handles authentication you can create and store the authentication ticket. This sets up the cookie and ticket so they can be consumed by each application.

// Gets the cookie containing a default authentication ticket
var cookie = FormsAuthentication.GetAuthCookie(username, rememberMe);

// Decrypts the default ticket so we can rebuild it with our own user data
var ticket = FormsAuthentication.Decrypt(cookie.Value);
var newTicket = new FormsAuthenticationTicket(
    ticket.Version,
    ticket.Name,
    ticket.IssueDate,
    ticket.Expiration,
    ticket.IsPersistent,
    "custom user data"); // whatever you want available in UserData

var encryptedTicket = FormsAuthentication.Encrypt(newTicket);
cookie.Value = encryptedTicket;
Response.Cookies.Add(cookie);

Reading the Authentication Ticket from the Cookie

To read the ticket in the other applications you can use code like this.

// Grab the forms authentication cookie from the request
var authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];

// This could throw an exception if it fails the decryption process. Check machineKeys for consistency.
var authenticationTicket = FormsAuthentication.Decrypt(authCookie.Value);

// Retrieve information from the ticket
var username = authenticationTicket.Name;
var userData = authenticationTicket.UserData;


Once you have these applications configured to share the authentication ticket, you can handle authentication between your sites much more seamlessly. Enjoy!

Using Enterprise Library To Decrypt a Java Encrypted Message

Last week I stumbled upon a tricky problem when tasked with encrypting and decrypting messages sent between a Java and .NET application.  The Java application was responsible for encrypting a message while the .NET application consumed and decrypted the message.  Ignoring the debate over the usage of an asymmetric algorithm, I was required to use a symmetric algorithm, specifically Rijndael.

Since we are primarily a .NET shop, I was expected to use the Enterprise Library, for good reason, as it simplifies encryption and decryption.  I quickly realized the simplicity gained on the .NET side was a trade-off for additional complexity on the Java side.  To start, let us look at the basic Java code needed to encrypt a string.

public String encrypt(String auth) {
  try {
    byte[] key = Base64.decodeBase64("SHARED KEY STRING"); // shared key as Base 64
    byte[] iv = Base64.decodeBase64("IV STRING");

    SecretKeySpec spec = new SecretKeySpec(key, "AES");
    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");

    cipher.init(Cipher.ENCRYPT_MODE, spec, new IvParameterSpec(iv));
    byte[] unencrypted = StringUtils.getBytesUtf16(auth);
    byte[] encryptedData = cipher.doFinal(unencrypted);
    String encryptedText = Base64.encodeBase64String(encryptedData);

    return encryptedText;
  } catch (Exception e) { }

  return "";
}
From the code above you can see we have a method that performs a pretty basic encryption.  Now let us look at the .NET code we will use to decrypt the message encrypted using the above method.

string message = Cryptographer.DecryptSymmetric("RijndaelManaged", encryptedText);

Next, I built the applications and tested out my encryption and decryption methods, and they promptly failed.  As you can see above, the .NET code does not appear to use an IV.  Thankfully, Microsoft publishes the Enterprise Library source code.  After taking a look I noticed something interesting: Microsoft randomly generates an IV each time the EncryptSymmetric method is called and then prepends the IV to the encrypted message.  Accordingly, the DecryptSymmetric method assumes the IV will be prepended to the encrypted message and extracts those prepended bytes before attempting to decrypt it.  I then went back to the Java application and replaced the end of the encrypt method with the following code.

// Prepend the IV so the Enterprise Library can extract it during decryption
byte[] entlib = new byte[iv.length + encryptedData.length];
System.arraycopy(iv, 0, entlib, 0, iv.length);
System.arraycopy(encryptedData, 0, entlib, iv.length, encryptedData.length);

return Base64.encodeBase64String(entlib);

Again, I built the code, and to my frustration it still did not work.  This time the .NET code actually decrypted the message, but the output was full of funny characters.  I suspected an encoding problem, but I was confused because I knew the Enterprise Library used UTF-16 (Unicode) encoding.  I opened up the code in Visual Studio and realized the Enterprise Library did indeed use Unicode encoding; however, it used little endian byte order.  So I went back to the StringUtils API and found a static method called getBytesUtf16Le(String string).

So I replaced:

byte[] unencrypted = StringUtils.getBytesUtf16(auth);

with:

byte[] unencrypted = StringUtils.getBytesUtf16Le(auth);

After building the code I finally had this little problem solved.  This turned out to be a bit more complex than I initially thought, however thankfully Microsoft provided the source code needed to solve the problem.

Only You Can Prevent Crap Code

Over the last few years I have evangelized about writing clean code.  I have read numerous resources, talked to developers, and spent time developing.  I have felt the pain of an ugly system, I have felt the love of beautiful code, and I have brought down pain and misery upon future developers.   Recently, I have reflected on the last eight years of my professional career and I realized why I obsess about clean code.   For me it comes down to respecting other developers and respecting my users.

What the hell are you talking about, you ask?  Well, besides thinking that was a mean way to ask a question, I believe writing good code is an obligation and a way to show future developers you respect their time.  We have all spent a lot of time reading code.  Experience shows that reading code is time consuming and typically more difficult than writing new code.  The more complex the code, the harder it is to read and understand.  If the code has poor variable names, no encapsulation, high coupling, god classes, misleading or stale comments, hard coded dependencies, custom design patterns, and the list goes on, then it is going to take a lot longer to understand and modify.

It is easy to write bad code, and maybe the developer thought they had to in order to meet a tight deadline.  This might sound convincing; however, it just cost the next developer over an hour to understand the code before they could even start modifying it, and now they have to cut corners to meet their own deadline.  On top of that, their level of confidence in the new code went down and they introduced a new bug.  If that developer doesn’t refactor the code, then the hour they spent figuring it out was wasted.  The next developer will have to spend another hour or more to understand the code.  Seems like a pretty awesome way to develop.  The problem gets worse over time, and inevitably something really bad is going to happen.

If you don’t care about the next developer, which isn’t professional by the way, then care about your users.  Initially, you could argue you wanted to ship new features to the users as quickly as possible.  This sounds great, but at what cost?  Future releases.  As you continue to build and enhance your software it will take longer to ship new features and you will increase the likelihood of defects.  Is this really what you want to give the user?  Of course it is, if you hate your users enough.  The users, PM, or stakeholders might be pushing for code quickly, but it is our responsibility to set expectations.  Writing sloppy code hurts the organization and the user, period.  Fight for good code.

I like to believe that most developers have good intentions and want to write SOLID code (http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)).  So why are we stuck with systems that turn out to be evil, gnarly balls of hate?  Code decays only if you let it.  Code rot is the guy you invite to the party that you know you shouldn’t because he invites all of his friends from prison.  Before you know it your house turns into a coke den full of dudes with Russian prison tattoos and rap sheets longer than an emo kid’s journal.  This is not a good place to be, trust me.  The bottom line is we have to be professional and write good code.  It is our job to create a system that is clean and maintainable.  Developers are responsible for fighting for the health of the code.  We cannot expect PMs, stakeholders, managers, or anyone else to understand the importance and fight for the cause.  If you want to be a hero, be like Batman: he cleans up the streets of Gotham!  Batman is badass and you can be too.  Aquaman would stay late every day for a month to reach a deadline.  He would cut corners instead of setting expectations.  Don’t be Aquaman, he sucks (http://www.youtube.com/watch?v=bBLaRSmqGS8).  Do yourself a favor: be Batman, not Aquaman.

Edit: After reflecting on this article I want to emphasize that no one developer is perfect and we have and probably will write dirty code at some point. I would like to see more developers provide constructive feedback about code to help each other learn and produce great software. After all, as we learn and progress our source should evolve. I believe sharing code is about helping and learning. It shouldn’t be about throwing daggers.

Don't Just Say You're Doing Code Reviews

I keep hearing how wonderful code reviews are to help improve the quality of code.  There is a lot of information about how code reviews can improve code design, reduce defects, and share knowledge.  I have participated in a number of code reviews, and I can definitely say they have the potential to be a powerful tool.  My problem with code reviews is not the concept, but the follow through.

I think it can be really easy to implement a code review process, but ultimately it takes discipline to make the process effective.  Like any good idea it is all about execution.  I believe there is a reason some are seeing true benefits from a code review while others are seeing very little in return.

Below are two lists of responsibilities for both the developer and the reviewer.  Of course this is not a list to end all lists, but it is definitely a sound starting point.  If your team can be disciplined to execute on these responsibilities you should start seeing consistent return from your code reviews.

Responsibilities of the reviewer:

  1. Evaluate the code and not the developer
  2. Understand the requirements and intent
  3. Ask to see the tests
  4. Review the design
  5. Collaborate to find ways to make the code better
  6. Be engaged

Responsibilities of the developer:

  1. Do not take feedback personally and be prepared to collaborate
  2. Reduce the size and scope of the review by having frequent reviews
  3. Communicate the requirements and intent
  4. Be willing to talk about the “what” and the “why” of what you did
  5. Be willing to make adjustments based on feedback
  6. Be engaged