Migrating Git Repositories with Full History

Earlier this year GitHub announced a pricing change and unlimited private repositories. I really like using GitHub and have decided to migrate my free Bitbucket repositories over to GitHub. At first I was hesitant, because I wanted to retain my entire commit history. Well, with Git this is a pretty trivial task.

We'll use a couple of basic Git features here, so it may be useful to check out the documentation for mirror and remote. The --mirror docs don't have a fancy fragment URL, so here they are for convenience.

--mirror

Set up a mirror of the source repository. This implies --bare. Compared to --bare, --mirror not only maps local branches of the source to local branches of the target, it maps all refs (including remote-tracking branches, notes etc.) and sets up a refspec configuration such that all these refs are overwritten by a git remote update in the target repository.

In order for this to work, we'll want to create an exact copy of the Bitbucket repository locally. We'll do this by creating a bare, mirrored clone.

git clone --mirror https://your_username@bitbucket.org/your_username/your-git-repository.git

Once we've created the local clone, we'll want to change the origin of the repository. Chances are, since this repository lives at Bitbucket, the origin is something like https://your_username@bitbucket.org/your_username/your-git-repository.git. We'll swap the push URL for the GitHub repository URL.

git remote set-url --push origin https://github.com/your_username/your-git-repository.git

After the origin remote is updated to point to the new GitHub repository, we'll need to push the mirrored repository.

git push --mirror

The steps above are a good way to migrate your repositories with full history. It is also possible to add a second remote to your existing repository, if you want to keep the existing origin. Here is a gist.
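That second-remote approach can be sketched end to end. The snippet below uses throwaway local repositories as stand-ins for Bitbucket and GitHub, so all paths here are hypothetical:

```shell
# Demo of the second-remote approach, with local repos standing in
# for Bitbucket (origin-repo) and GitHub (github-repo).
set -e
tmp=$(mktemp -d)

# Stand-in for the existing Bitbucket repository, with one commit of history.
git init -q "$tmp/origin-repo"
git -C "$tmp/origin-repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"

# Stand-in for the new, empty GitHub repository.
git init -q --bare "$tmp/github-repo"

# Clone as usual, keep origin, and add GitHub as a second remote.
git clone -q "$tmp/origin-repo" "$tmp/work"
git -C "$tmp/work" remote add github "$tmp/github-repo"

# Push every branch and tag to the new remote; origin is untouched.
git -C "$tmp/work" push -q github --all
git -C "$tmp/work" push -q github --tags

# The full history is now on the new remote.
git -C "$tmp/github-repo" log --oneline
```

Note that pushing with --all plus --tags only pushes branches and tags, which keeps the destination free of the remote-tracking ref clutter that push --mirror can copy over from a regular (non-mirror) clone.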

Introduction to Content Negotiation

Content negotiation, simply put, is a way for a client and server to determine how content is requested and returned over HTTPS (or HTTP; please be kind, use HTTPS). Fortunately for developers, content negotiation is defined in the HTTP/1.1 specification (RFC 2616, Section 12). There are three types of content negotiation, but for the purpose of this article we'll focus on server-driven negotiation. That is when the user agent or client requests content using HTTP headers to help the server decide how to format the data sent back to the client. The headers typically used in content negotiation are Accept, Accept-Charset, Accept-Encoding, and Accept-Language.

The fastest way to see content negotiation in action is to open Chrome's developer tools, navigate to the Network tab, and go to any website. If you click on a request you will notice Chrome automatically populates the Accept, Accept-Encoding, and Accept-Language headers with values like those below. These headers are how Chrome negotiates with the server how it prefers to receive content for the given request.

GET / HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

Let's break this down a bit. The Accept header specifies the media types preferred for the data returned in the response body, as a weighted list. In this example Chrome asks for HTML, XHTML, and WebP images at full preference (an omitted q defaults to 1), XML at a slightly lower weight of q=0.9, and anything else (*/*) at q=0.8. Chrome uses the Accept-Encoding header to specify how it wants the response encoded. Finally, with the Accept-Language header it requests en-US as its first preference, falling back to non-localized English at q=0.8.

The server, if using server-driven negotiation, can take any of these headers into account when deciding how to return the content. It is important to note that the user agent or client does not control how content is returned; it simply declares preferences with each request. In server-driven content negotiation the server's algorithm is responsible for making the determination (or "best guess", as it is worded in the specification). Most servers will look at these HTTP headers when deciding how to format the response.
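The server's side of that decision can be sketched as a tiny shell function. This is purely hypothetical (no real server works this way, and it ignores partial wildcards like text/*), but it shows the core idea: pick the supported media type carrying the highest q-value in the Accept header.

```shell
# Toy sketch of server-driven negotiation (hypothetical): given an Accept
# header and a list of supported media types, pick the supported type with
# the highest q-value. Ignores partial wildcards like text/*.
negotiate() {
  accept=$1; shift               # remaining arguments: supported media types
  best=""; best_q=0
  old_ifs=$IFS; IFS=','; set -f  # split on commas, no glob expansion
  for entry in $accept; do
    type=$(printf '%s' "${entry%%;*}" | tr -d ' ')   # media type before ';'
    q=1                                              # omitted q defaults to 1
    case $entry in *q=*) q=${entry##*q=} ;; esac
    for supported in "$@"; do
      if [ "$type" = "$supported" ] || [ "$type" = "*/*" ]; then
        # keep this candidate if its q beats the best so far (float compare)
        if [ "$(awk -v a="$q" -v b="$best_q" 'BEGIN { print (a > b) }')" = 1 ]; then
          best=$supported; best_q=$q
        fi
      fi
    done
  done
  set +f; IFS=$old_ifs
  printf '%s\n' "$best"
}

# JSON at q=0.9 outweighs XML at q=0.8, so this sketch picks application/json.
negotiate "application/json;q=0.9,application/xml;q=0.8" application/xml application/json
```

An empty result here would correspond to the "no acceptable representation" case, where real servers must choose between a fallback format and an error such as 406 or 415, which is exactly the behavioral difference explored below.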

As developers we often need to know more about content negotiation when building HTTP(S) based services. After consuming a few APIs and web services it becomes apparent that many of them have different implementations of, and limitations around, content negotiation.

Experimenting with content negotiation

Using Postman, I generated this request to an ASP.NET Web API running locally. The request uses the Accept HTTP header requesting the server respond with XML.

GET /api/Person HTTP/1.1
Host: localhost:65307
Accept: application/xml
Cache-Control: no-cache

By default, ASP.NET responded with the XML media type. The server made its decision based on the Accept header. ASP.NET has a more complex set of rules than this, which we will dig deeper into in a future article.

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/xml; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-8?B?XFxnY2MuaW50XHVzZXJzXGhvbWVcc2hhd25tXHZpc3VhbCBzdHVkaW8gMjAxNVxQcm9qZWN0c1xEZW1vXERlbW9cYXBpXFBlcnNvbg==?=
X-Powered-By: ASP.NET
Date: Sat, 09 Jul 2016 11:27:20 GMT
Content-Length: 194

<Person xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/Demo.Models"><Age>30</Age><FirstName>Some</FirstName><LastName>Dewd</LastName></Person>

Today, many developers prefer to use JSON. To receive JSON instead of XML, simply update the request's Accept header to prefer the application/json media type (weighted here with q-values).

GET /api/Person HTTP/1.1
Host: localhost:65307
Accept: application/json;q=0.9,application/xml;q=0.8
Cache-Control: no-cache

As one can see, the server responded with equivalent data. This time, instead of XML, the server responded with a JSON string in the response body.

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
Date: Sat, 09 Jul 2016 11:34:22 GMT
Content-Length: 47

{"FirstName":"Some","LastName":"Dewd","Age":30}

To show off more interesting behavior in ASP.NET's server-driven content negotiation, we can request a media type not supported out of the box. (Leaving off the Accept header entirely produces the same result.) When the ASP.NET content negotiation algorithm cannot provide a response in the requested format, it falls back to its default media type formatter, which for ASP.NET Web API is application/json.

GET /api/Person HTTP/1.1
Host: localhost:65307
Accept: text/html
Cache-Control: no-cache

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
Date: Sat, 09 Jul 2016 13:16:56 GMT
Content-Length: 47

{"FirstName":"Some","LastName":"Dewd","Age":30}

ASP.NET is just one of many technologies for building web services. GitHub also uses server-driven content negotiation, but implements a different behavior. For example, unlike ASP.NET Web API's default behavior, GitHub does not fall back to a default media type formatter when a client requests an unsupported media type. Instead, GitHub responds with an error formatted as JSON.

GET /user/sgmeyer/repos HTTP/1.1
Host: api.github.com
Accept: application/xml
Cache-Control: no-cache

HTTP/1.1 415 Unsupported Media Type
Server: GitHub.com
Date: Sat, 09 Jul 2016 13:23:14 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 167
Status: 415 Unsupported Media Type
X-GitHub-Media-Type: unknown
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 58
X-RateLimit-Reset: 1468074180
Access-Control-Expose-Headers: ETag, Link, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval
Access-Control-Allow-Origin: *
Content-Security-Policy: default-src 'none'
Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-XSS-Protection: 1; mode=block
X-GitHub-Request-Id: 325142E0:2FFB:80ACDE9:5780FAC2

{
  "message": "Unsupported 'Accept' header: [\"application/xml\"]. Must accept 'application/json'.",
  "documentation_url": "https://developer.github.com/v3/media"
}

Content negotiation is a great way to improve the developer experience of an API. You can provide conveniences for customers by allowing them to choose the format of the data coming back from the service. There are many ways to implement content negotiation, but once understood it is a powerful tool for a developer.

TFS Source Control Explorer Performance Issue

Over the last few weeks I noticed a steady decline in the performance of TFS Source Control Explorer. I first noticed the degradation while traversing the tree view of the directories under source control. Each time I expanded a node in the explorer window it would spin for 5-10 seconds. After a while, get latest, shelving, and committing slowed to a snail's pace. Today I had enough and decided to dig into the issue; admittedly, I waited too long before solving this problem.

To keep this short and sweet: TFS does not like handling a workspace with over 100,000 files. The problem was that every branch I created went into the same workspace. This was clearly a boneheaded move, and one that was easy to mitigate. Ultimately, I decided to create a separate workspace for each branch. Once I made this change, TFS Source Control Explorer and TFS management in general were much faster.

Installing Yeoman.io to Make Life Easier

Yeoman.io is a collection of three technologies. The idea behind Yeoman is to help developers accomplish common, tedious tasks such as building, linting, minification, scaffolding, and previewing/running code. Yeoman isn't just a tool; it is a companion at every stage of development, from starting a project through developing, testing, and deploying it. Earlier I gave a quick dive into the various components of Yeoman.io, which you can check out at A Quick Intro Into Yeoman.io. Next, I want to show you how easy it is to install Yeoman and start taking advantage of its power.

First, you are going to need node.js and the node.js package manager (npm) installed. This is easy enough: navigate to the node.js download page and follow the instructions. By default the node package manager is installed alongside node.js. Be sure not to omit it from the installation, as we will need it to install Yeoman, Grunt, and Bower.

Once you have installed node.js, open your terminal or command window. To test that you have successfully installed node.js and npm, type the following commands.

node --version
npm --version
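If you'd rather script that check than eyeball it, a small helper like this works (purely a sketch; the `require` function is hypothetical, not part of node or Yeoman):

```shell
# A tiny helper (hypothetical) that verifies required tools exist on the
# PATH before you start installing things on top of them.
require() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool found"
    else
      echo "$tool missing" >&2
      missing=1
    fi
  done
  return $missing
}

# Before installing Yeoman you would run: require node npm
require sh    # demo with a tool that is always present
```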

If your terminal window spits out version numbers, you are good to go. Now that node.js and npm are installed, we can install Yeoman. We are going to use npm to install Yeoman globally, and this will automagically install Grunt and Bower, as they are dependencies of Yeoman. With the terminal window still open, run the command below to begin installation.

npm install -g yo

This will take a bit of time to download the app as well as its dependencies, but you only have to do it once. Once Yeoman (yo) is done installing, we can begin using the tool. Yeoman doesn't come with many generators out of the box; it does, however, let you download one of the many generators that will meet your needs. For this article we will use generator-angular to scaffold our next application, so first we must download the generator.

npm install -g generator-angular

This generator provides instructions to yo for scaffolding your next angular project. There are many more generators available to you, but here are some listed on Yeoman.

Once the generator is installed you can begin using it. It is just that easy!

mkdir c:/project/projectName
cd c:/project/projectName
yo angular

After a few seconds of downloading your dependencies and scaffolding the application your new angular project is good to go. Each generator may ask you a set of questions about your application. These steps are to help Yeoman create a more customized project to meet your needs. Help Yeoman out so he can help you out.

Enjoy!

Securing an MVC Application

When building an MVC application, authentication is an important part of securing your website. I was recently creating a second application that consumed the authentication ticket from our main application. In my last post I showed how to share the forms authentication ticket between multiple applications; now that the ticket is being shared, we need to plug it into our site.

If you recall from the last post the secondary application delegates the authentication and credential verification. Instead of using a form for the user to enter credentials, we validate the authentication ticket and establish the principal and identity. This can be done in the Application_AuthenticateRequest method.

// This is the Global.asax.cs file
public class MyApplication : HttpApplication
{
    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        // Pulls the cookie name from the configuration (default .ASPXAUTH)
        string cookieName = FormsAuthentication.FormsCookieName;
        HttpCookie cookie = Context.Request.Cookies[cookieName];

        bool cookieExists = cookie != null;

        if (cookieExists)
        {
            try
            {
                FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
                string[] roles = new string[0]; // get your roles from somewhere
                FormsIdentity identity = new FormsIdentity(ticket);
                GenericPrincipal principal = new GenericPrincipal(identity, roles);

                // Attach the principal so the rest of the pipeline sees the user.
                Context.User = principal;
            }
            catch (ArgumentException)
            {
                // The cookie could not be decrypted; treat the request as anonymous.
            }
        }
    }
}

Once the authentication mechanism is in place we can handle the authorization by decorating controllers or actions with the AuthorizeAttribute. By tagging the controller with the AuthorizeAttribute we are saying that any action in this class will require the user to be authenticated. Since we did not provide any roles to the attribute it just prevents access to anonymous users. This is all good until you want to allow certain actions in a controller to be accessible by anonymous users such as a login action. We can enable anonymous users by tagging actions with the AllowAnonymousAttribute.

[Authorize]
public class AccountController : Controller
{
    [AllowAnonymous]
    public ActionResult Login()
    {
        return View();
    }

    // Requires authorization
    public ActionResult Index()
    {
        return View();
    }
}

This model for authorization seems pretty good until we want to add another protected controller. When adding another controller we realize we could easily forget to add the AuthorizeAttribute to the new controller or maybe another developer adds a controller and is not aware how to protect the code.

public class AdminController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult ManageUserPasswords()
    {
        return View();
    }
}

You or another developer could easily miss this detail. Then sensitive functionality is exposed to anonymous users and your application is just waiting to be compromised. We can avoid these mistakes by implementing a global filter. Instead of each controller and action opting in to requiring authorization, all controllers require authorization, and any action needing anonymous access must be tagged to opt out.

public class FilterConfig
{
    public static void RegisterGlobalFilter(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new AuthorizeAttribute());
    }
}

With the AuthorizeAttribute registered as a global filter, every controller and action now rejects anonymous users. If we want to continue to allow anonymous users on a certain action, we simply decorate it with [AllowAnonymous].