The client-side vs server-side rendering debate


Ever since Twitter announced that they were moving back to server-side HTML rendering, everyone has been talking about how much faster it is, and it has called into question the recent trend toward client-side rendering.

So which is better?  I believe that it depends on what you are building.

In the case of Twitter, most users are simply looking at a read-only list of tweets.  Server-side rendering works very well here because most of the information is read-only.  Now, imagine that you were looking at an editable list of tweets and you wanted to build an interface that allowed you to edit in place without a bunch of post-backs.  At this point you have to work your way through manipulating the HTML, extracting the changes from it, and saving them back to the server in an ajax request.  This works for simple scenarios, but you will likely end up reaching for some kind of client-side templating to make the process easier.  Eventually you will run into issues because you have two places that build the same HTML, and they need to be kept in sync.
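
To make that concrete, here is a rough sketch of the edit-in-place flow using plain jQuery; the selectors, the data-id attribute, and the /tweets endpoint are made up for illustration:

$('.tweet').dblclick(function () {
   var tweet = $(this);
   var input = $('<input/>').val(tweet.text());

   // Swap the rendered text for an editable input.
   tweet.empty().append(input);

   input.blur(function () {
      var text = input.val();

      // Extract the change from the DOM and save it back with an ajax request.
      $.ajax({
         url: '/tweets/' + tweet.data('id'),
         type: 'PUT',
         data: { text: text }
      }).done(function () {
         // Rebuild the markup client-side; the server-side template that first
         // rendered this tweet now has a twin living in JavaScript.
         tweet.text(text);
      });
   });
});

Even this tiny example already rebuilds markup in two places, which is exactly the synchronization problem described above.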

This just scratches the surface.  Now imagine that you have some business logic associated with the changes that are made.  If you depend on the server for rendering the HTML, you either have to duplicate that rendering client-side or send everything (ugh) back to the server for re-rendering.

At some point you are adding unnecessary load on your server when the browser is perfectly capable of doing the rendering itself. Offload that work to your user’s browser.

In short, mostly read-only sites with a small amount of client-side manipulation are a good fit for server-side rendering, while interactive, thick-client-style websites are a better fit for client-side rendering.

How and why I build decoupled frontends


As the web has gotten more and more dynamic, I have been using JavaScript to update content on a page without constantly sending postbacks and rendering HTML server-side.  In many instances in the past, I have rendered the initial HTML server-side and then later updated it via JavaScript.  The problem with this model is that you end up with two different sets of code rendering the same HTML.  If you need to make a change, you have to update both your server-side and client-side templates.

Over the past year, I have begun to wonder why we generate HTML server-side when the browser is perfectly capable of doing it.  What if a desktop application had its UI rendered by its backend database?  Seems crazy, but that is essentially what traditional web development looks like.  Each time you click a button on a page, information is sent to the server, the server rebuilds the entire UI, and the browser then re-renders the entire page.  Both computers and browsers are getting faster, so why not pass that load onto the client? 

I have started to change the way I do web development by looking at the browser as a platform for hosting thick-client applications.  The server simply becomes a datasource/API for my application.  I even take the concept as far as serving the JS/HTML/CSS from a separate IIS site.  None of my static resources are ever processed by my server-side code.  I want my frontend to be completely separated from the API.  It shouldn’t know or care whether the backend is written in .NET, Rails, or node.js.
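
As a rough illustration (the host name and endpoint are assumptions, and cross-origin details are ignored), the frontend only ever talks to the API over HTTP:

// The frontend knows a base URL, not the technology behind it.
var Tweet = Backbone.Model.extend({
   urlRoot: 'http://api.example.com/tweets'
});

var tweet = new Tweet({ id: 42 });
tweet.fetch();                            // GET  http://api.example.com/tweets/42
tweet.save({ text: 'edited in place' });  // PUT  http://api.example.com/tweets/42

Swap the API implementation out from under it and nothing on the frontend changes.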

Here are the advantages to building a completely decoupled frontend:

  • The frontend development team can work completely independently from the backend team.  For example, frontend development can be done on OSX even if the backend is written in .NET.
  • This obviously means that you have fewer dependencies to do work.  The backend team doesn’t need to have a frontend built in order to test their API, and the frontend team doesn’t need the backend to work in order to continue development. (I will explain how I do this later)
  • The scope of knowledge of each developer can be narrower.  The backend developers don’t have to know about the frontend and vice-versa.  JavaScript developers can be JavaScript developers, and Rails/.NET developers can be Rails/.NET developers.
  • You can have independent SDLC processes and CI strategies
  • You are left with an API that can be reused for mobile apps or other frontends
  • You pass the processing load of rendering HTML, etc off to the client
  • Security becomes easier to enforce at the API layer.  You don’t have to worry about a rogue WebForms control exposing secure data.  Yes, this can be handled at the application layer, but the opportunity for mistakes is greater

Here are some of the things I do/rules I follow when building decoupled frontends:

  • I chose backbone.js for its lack of opinions and flexibility (any of the dozens will do as long as they don’t tie your hands behind your back)
  • I do my frontend development in Sublime Text 2 and my backend development in Visual Studio
  • The frontend and backend are stored in separate git repositories
  • RequireJS is a great tool to modularize each part of your site
  • All of my templates are precompiled into JS files as RequireJS modules. When they load, they are compiled into JS functions and registered once with the template cache. (A sketch follows this list.)
  • Modularity is the key to being able to build independent parts of your application quickly
  • Make your CSS modular with scopes as well.  (Good read –> http://smacss.com/book/)
  • Read this.  It will change the way you look at JavaScript
  • I try to avoid jQuery plugins.  I find that they try to do a lot you don’t need, and what you actually need can usually be accomplished with much less code.
  • I unit test/integration test my backbone views using Jasmine with Sinon.JS
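
Here is a sketch of what one of those precompiled template modules might look like.  The Underscore template, the app/templateCache helper, and the file names are assumptions for illustration:

// templates/tweet.js - a template shipped as a RequireJS module.
define(['underscore', 'app/templateCache'], function (_, templateCache) {
   // Compile the markup into a function once, when the module first loads...
   var compiled = _.template('<li class="tweet"><%= author %>: <%= text %></li>');

   // ...and register it once with the (hypothetical) application-level template cache.
   templateCache.register('tweet', compiled);

   return compiled;
});

// A Backbone view can then render without knowing how the template was built.
define(['backbone', 'templates/tweet'], function (Backbone, tweetTemplate) {
   return Backbone.View.extend({
      render: function () {
         this.$el.html(tweetTemplate(this.model.toJSON()));
         return this;
      }
   });
});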

The most important thing I do is use Sinon.JS to mock the backend server.  I don’t try to replicate every call, just enough to make the frontend functional without any backend.  This allows frontend development to occur without any dependencies on the backend.
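
A minimal fake server might look like the following; the routes and payloads are invented for illustration and only cover what the frontend needs:

// Stand in for the backend; sinon intercepts the XHRs that Backbone issues.
var server = sinon.fakeServer.create();
server.autoRespond = true;

// A read-only list the views can render against.
server.respondWith('GET', '/api/tweets', [
   200,
   { 'Content-Type': 'application/json' },
   JSON.stringify([{ id: 1, author: 'bob', text: 'hello world' }])
]);

// Saves succeed silently so edit flows can be exercised end to end.
server.respondWith('PUT', /\/api\/tweets\/\d+/, [204, {}, '']);

The same fakes double as fixtures for the Jasmine specs, so the tests and the development sandbox share a single definition of the backend.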

A simple way to run unit tests across browsers with node.js, socket.io, and Jasmine


I have been evaluating different test runners for Jasmine unit tests.

I wanted something that would meet the following criteria:

  • Simple to set up and run
  • Run tests in the actual browser
  • Fit into my workflow so that running tests is not painful
  • Works on Windows

I looked at JSTestDriver, Selenium CI, etc., but I couldn’t find one that was simple, ran in a browser, and worked on Windows.

So, I decided to write my own with node.js and socket.io.  Luckily, socket.io made this so easy that I was able to build it in just under an hour.  The clients just listen for a socket event from the controller telling them to load a Jasmine test page in an iframe.  Once the page finishes running, the client sends an event with the results back to the controller.  Take a look at the code:

Here is the HTML:

<html>
   <head>
   <script type="text/javascript" src="http://localhost:8000/socket.io/socket.io.js"></script>
   <script type="text/javascript" src="http://code.jquery.com/jquery-1.6.2.min.js"></script>
   <script type="text/javascript" src="jasmine-test-client.js"></script>
   </head>
   <body>
      Controller/Client:
      <select id="type">
         <option></option>
         <option>Client</option>
         <option>Controller</option>
      </select>
      <div id="client" style="display:none">
         Browser: <input type="text" /><br>
         <iframe src="" width="200" height="500"></iframe>
      </div>
      <div id="controller" style="display:none">
         Test Url: <input type="text"/>
         <button type="button">Run Tests</button>
         <div class="results"></div>
      </div>
   </body>
</html>

Here is the client JavaScript:

function detectBrowser(){
   /** Omitted for clarity **/
};

var socket = io.connect('http://localhost:8000');

var setupClient = function(){
   var iframe = $("#client iframe");
   var browser = $("#client input").val(detectBrowser());
   
   socket.on('refresh', function (data) {
      iframe.attr("src", data);
   });

   iframe.load(function(){
      var sendResults = function(){
         var results = iframe.contents().find("#TrivialReporter .runner a.description").text();
         socket.emit('result', { browser: browser.val(), results: results });
      }

      var counter = 0;

      var checkdone = function () {
          if ( iframe.contents().find('span.finished-at').text().length > 0) {
              clearInterval(timer);
              sendResults();

          } else {
              counter += 500;
              if (counter > 8000) {
                  clearInterval(timer);
              }
          }
      }
      var timer = setInterval(checkdone, 500 );
   });

   $("#client").show();
};

var setupController = function(){
   var refreshButton = $("#controller button");
   var results = $(".results");

   refreshButton.click(function(){
      results.empty();
      socket.emit('runtests', $("#controller input").val());
   });
   
   socket.on('result', function(data){
      var result = $("<span/>").text(data.browser + ": " + data.results);
      if(data.results.indexOf('0 failures') !== -1){
         result.css('background-color','#dfd');
      }
      else{
         result.css('background-color','#fdd');
      }
      results.append(result).append("<br>");
   });
   $("#controller").show();
};

$(document).ready(function(){
   $("#type").change(function(){
      var dropdown = $(this);
      dropdown.attr('disabled','disabled');
      if(dropdown.val() === 'Client')
      {
         setupClient();
      }
      else
      {
         setupController();
      }
   });
});

The detectBrowser function is here (modified from a comment on the jQuery documentation site):

function detectBrowser(){
   var userAgent = navigator.userAgent.toLowerCase();
   $.browser.chrome = /chrome/.test(navigator.userAgent.toLowerCase());
   var version = 0;
   var browser = '';

   // Is this a version of IE?
   if($.browser.msie){
      userAgent = $.browser.version;
      userAgent = userAgent.substring(0,userAgent.indexOf('.'));
      version = userAgent;
      browser = 'Internet Explorer';
   }

   // Is this a version of Chrome?
   else if($.browser.chrome){
      userAgent = userAgent.substring(userAgent.indexOf('chrome/') +7);
      userAgent = userAgent.substring(0,userAgent.indexOf('.'));
      version = userAgent;
      browser = 'Chrome';
   }

   // Is this a version of Safari?
   else if($.browser.safari){
      userAgent = userAgent.substring(userAgent.indexOf('safari/') +7);
      userAgent = userAgent.substring(0,userAgent.indexOf('.'));
      version = userAgent;
      browser = 'Safari';
   }

   // Is this a version of Mozilla?
   else if($.browser.mozilla){
      // Is it Firefox?
      if(navigator.userAgent.toLowerCase().indexOf('firefox') != -1){
         userAgent = userAgent.substring(userAgent.indexOf('firefox/') +8);
         userAgent = userAgent.substring(0,userAgent.indexOf('.'));
         version = userAgent;
         browser = 'Firefox';
      }
      // If not, then it is some other Mozilla-based browser
      else{
      }
   }

   // Is this a version of Opera?
   else if($.browser.opera){
      userAgent = userAgent.substring(userAgent.indexOf('version/') +8);
      userAgent = userAgent.substring(0,userAgent.indexOf('.'));
      version = userAgent;
      browser = 'Opera';
   }
   return browser + ' ' + version;
}

The server side looks like this:

var io = require('socket.io').listen(8000);

// open the socket connection
io.sockets.on('connection', function (socket) {

   socket.on('runtests', function (url) {
      socket.broadcast.emit('refresh', url);
   });
   socket.on('result', function (data) {
      socket.broadcast.emit('result', data);
   });

});

Now, all you have to do is run the server file with node.js, host the HTML page using IIS Express, and open the page in the browsers that you want to test.

The controller page simply lists each connected browser with its results, highlighted green when there are zero failures and red otherwise (screenshot not included).

Not the best option and not the cleanest code, but certainly a lot of value for the simplicity.

Implementing a WebSocket handshake in C#


I decided to implement a WebSocket server in C# to learn more about the protocol.  Since the protocol is constantly changing, I imagine that this will be out of date very quickly.  At least that was the case when I was trying to find examples.

The version of the protocol that I am going to implement is the one used by Chrome 13, specifically version 13.0.782.112 m.  The version of the WebSocket spec that Chrome 13 uses is “draft-hixie-thewebsocketprotocol-76”.  The details of the server’s response are located in section 5.2 of that document.

The following JavaScript will initiate a request:

var socket = new WebSocket('ws://localhost:8181/');
socket.onopen = function () {
    alert('handshake successfully established. May send data now...');
};
socket.onclose = function () {
    alert('connection closed');
};

The request looks as follows:

GET / HTTP/1.1
Connection: Upgrade
Host: example.com
Upgrade: WebSocket
Sec-WebSocket-Key1: 3e6b263  4 17 80
Origin: http://example.com
Sec-WebSocket-Key2: 17  9 G`ZD9   2 2b 7X 3 /r90

WjN}|M(6

The algorithm is explained fairly clearly in the documentation and is probably best explained by sample code.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Sockets;
    using System.Security.Cryptography;
    using System.Text;
    using System.Text.RegularExpressions;

    class Program
    {
        private static readonly string _serverUrl = "ws://localhost:8181/";

        static void Main(string[] args)
        {
            var listener = new TcpListener(IPAddress.Loopback, 8181);
            listener.Start();
            using (var client = listener.AcceptTcpClient())
            using (var stream = client.GetStream())
            {
                List<byte> requestList = new List<byte>();

                //wait until there is data in the stream
                while (!stream.DataAvailable) { }

                //read everything in the stream
                while(stream.DataAvailable)
                {
                    requestList.Add((byte)stream.ReadByte());
                }
                //send response
                byte[] response = GenerateResponse(requestList.ToArray());
                stream.Write(response, 0, response.Length);
            }
            listener.Stop();
        }

        public static byte[] GenerateResponse(byte[] request)
        {
            //extract request token from end of request
            byte[] requestToken = new byte[8];
            Array.Copy(request, request.Length - 8, requestToken, 0, 8);

            string requestString = Encoding.UTF8.GetString(request);
            StringBuilder response = new StringBuilder();
            response.Append("HTTP/1.1 101 WebSocket Protocol Handshake\r\n");
            response.Append("Upgrade: WebSocket\r\n");
            response.Append("Connection: Upgrade\r\n");
            response.AppendFormat("Sec-WebSocket-Origin: {0}\r\n", GetOrigin(requestString));
            response.AppendFormat("Sec-WebSocket-Location: {0}\r\n", _serverUrl);
            response.Append("\r\n");

            byte[] responseToken = GenerateResponseToken(GetKey1(requestString), GetKey2(requestString), requestToken);
            return Encoding.UTF8.GetBytes(response.ToString()).Concat(responseToken).ToArray();
        }

        public static string GetOrigin(string request)
        {
            return Regex.Match(request, @"(?<=Origin:\s).*(?=\r\n)").Value;
        }

        public static string GetKey1(string request)
        {
            return Regex.Match(request, @"(?<=Sec-WebSocket-Key1:\s).*(?=\r\n)").Value;
        }

        public static string GetKey2(string request)
        {
            return Regex.Match(request, @"(?<=Sec-WebSocket-Key2:\s).*(?=\r\n)").Value;
        }

        public static byte[] GenerateResponseToken(string key1, string key2, byte[] request_token)
        {
            int part1 = (int)(ExtractNums(key1) / CountSpaces(key1));
            int part2 = (int)(ExtractNums(key2) / CountSpaces(key2));
            byte[] key1CalcBytes = ReverseBytes(BitConverter.GetBytes(part1));
            byte[] key2CalcBytes = ReverseBytes(BitConverter.GetBytes(part2));
            byte[] sum = key1CalcBytes
                        .Concat(key2CalcBytes)
                        .Concat(request_token).ToArray();

            return new MD5CryptoServiceProvider().ComputeHash(sum);
        }

        public static int CountSpaces(string key)
        {
            return key.Count(c => c == ' ');
        }

        public static long ExtractNums(string key)
        {
            char[] nums = key.Where(c => Char.IsNumber(c)).ToArray();
            return long.Parse(new String(nums));
        }

        //converts to big endian
        private static byte[] ReverseBytes(byte[] inArray)
        {
            byte temp;
            int highCtr = inArray.Length - 1;

            for (int ctr = 0; ctr < inArray.Length / 2; ctr++)
            {
                temp = inArray[ctr];
                inArray[ctr] = inArray[highCtr];
                inArray[highCtr] = temp;
                highCtr -= 1;
            }
            return inArray;
        }
    }

Pushing SharePoint’s Limits: How many unique users can SharePoint really handle? – Part 1


I recently had a client that is using SharePoint 2007 as a portal to distribute content to a large audience.  Not only is the audience large, but it is diverse.  Each user will see different content in the portal based on the roles that they serve.

The problem is… there are lots of roles.  Thousands of them.  Now, everybody knows that SharePoint starts to croak at about 2,000 items in a list.  This also holds true for the number of unique Security Principals in the site.  This obviously causes an issue when you want to break permissions on lots of items in a list, because you will end up with way too many Security Principals on the site.

Luckily, the 2,000-item limit can easily be worked around if you are using custom interfaces to lists.  The limits are associated with the default views that come from Microsoft.  I learned all of this during a presentation by Eric Shupps, a SharePoint MVP in the Dallas/Ft. Worth area.  Most of the content of the presentation can be found on his blog here: http://www.binarywave.com/blogs/eshupps/Lists/Posts/Post.aspx?ID=188.

In my situation, I needed to know how SharePoint would handle large lists with unique permissions added into the mix.  Specifically, I wanted to know at what point, and why, SharePoint breaks down when a list contains a large number of items with unique permissions.

To find out, I created a console app to measure the time it took to do the following (a simplified sketch of the measurement loop follows this list):

  • Add a new item to the list
  • Assign unique permissions to the new item
  • Query the first page of 100 items from the list using SPQuery
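
Here is a rough sketch of that loop.  The site URL, list name, and user logins are assumptions for illustration, and error handling is omitted:

using System;
using System.Diagnostics;
using Microsoft.SharePoint;

class PermissionTimer
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://localhost"))   // test web app (assumed URL)
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["PermTest"];               // simple custom list (assumed name)

            for (int i = 0; i < 8000; i++)
            {
                // 1. Time adding a new item.
                Stopwatch addTimer = Stopwatch.StartNew();
                SPListItem item = list.Items.Add();
                item["Title"] = "Item " + i;
                item.Update();
                addTimer.Stop();

                // 2. Time breaking inheritance and granting a user unique permissions.
                //    The pool of 2,000 users and the login format are assumptions.
                Stopwatch permTimer = Stopwatch.StartNew();
                SPUser user = web.EnsureUser(@"DOMAIN\user" + (i % 2000));
                item.BreakRoleInheritance(false);
                SPRoleAssignment assignment = new SPRoleAssignment(user);
                assignment.RoleDefinitionBindings.Add(web.RoleDefinitions["Read"]);
                item.RoleAssignments.Add(assignment);
                permTimer.Stop();

                // 3. Time querying the first page of 100 items.
                Stopwatch queryTimer = Stopwatch.StartNew();
                SPQuery query = new SPQuery { RowLimit = 100 };
                int count = list.GetItems(query).Count;
                queryTimer.Stop();

                Console.WriteLine("{0},{1},{2},{3},{4}", i, count, addTimer.ElapsedMilliseconds,
                    permTimer.ElapsedMilliseconds, queryTimer.ElapsedMilliseconds);
            }
        }
    }
}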

In order to test this, I created a new web app with a blank site collection and a new content database.  I connected it to custom membership and role providers that allow me to have thousands of users.  I also used a variation of the technique talked about here: http://sladescross.wordpress.com/2010/03/19/item-level-permissions-performance-problem/.  This allowed me to reduce the amount of time it takes to break permissions.

In my first test, I used a pool of 2,000 users and added 8,000 items to a simple list.  Each item in the list had unique permissions assigned to it.  The first 2,000 items all had a new user who did not have any permissions on the site, so after that point, there were no new users being added to the site.  Here is what happened:

[Chart: time to add an item, assign unique permissions, and query the first 100 items, plotted against the number of items in the list (up to 8,000).]

You will notice a couple of things.  Once I reached 2,000 items, the time to break permissions dropped significantly and flatlined.  This was also the point at which I stopped adding new users to the site and started assigning items to users who already had permissions on the site.  The other thing you will notice is that the time to query the list jumped at 1,000 items and then began increasing linearly.

In my next test, I wanted to see how the time to query the list was affected by the number of unique users on the site, if at all.  To do this, I ran the same test with pools of 1 user, 1,000 users, 2,000 users, and 12,000 users.

[Chart: time to query the list for user pools of 1, 1,000, 2,000, and 12,000 users, with the query time for a list without broken permissions shown as a baseline.]

From this, you can see that the total number of users on the site does not have a significant impact on the time to query the list.  It is strictly about the number of items that have unique permissions, not the total number of different users on the site.  You can see the time to query the list when permissions are not broken as a reference point.

If you remember from the first chart, the time to query the list significantly jumped at 1,000 items.  I was curious about the consistency of that number.  Here are the times to query the list for the first 2,000 items with unique permissions.

[Chart: query times for the first 2,000 items with unique permissions, for each user pool size.]

You can clearly see that at 1,000 items the time to query the list significantly increases no matter how many unique users there are.

However, the biggest problem is the amount of time it takes to add a new user to the site.  This can get very large very quickly.  Take a look at the amount of time it takes to add a user when you start getting near 12,000 users.

[Chart: time to add a new user to the site as the number of unique users approaches 12,000.]

You are getting to a point where it takes more than 5 seconds to add a new user to the site.

How can that time be reduced?  Is there a way to pre-add users to the site?

In the next part of this post, I will be examining ways to pre-add users to the site so that new items can be added more quickly.  I am also curious how lists scale horizontally.  I have shown that the time to query a list jumps at 1,000 items, but what if I create 100 lists with 500 items each?

SharePoint 2010 Development VM: Part 9-Compacting your images for distribution


Overview

  1. Create Server Core VM and setup Active Directory Services
  2. Install and prepare Server 2008 R2
  3. Install unconfigured SQL binaries
  4. Install unconfigured SharePoint binaries
  5. Sysprep using MySysprep2
  6. Setup AD and configure Group Policy
  7. SQL install script
  8. SharePoint install/configure script
  9. Compacting your images for distribution

Now that I have an image, I like to make it as small as possible for distribution and storage.  To do this, I do three things: defragment, zero the free space, and compress.

To do this, I created a WinPE ISO and booted my VM into it by following the instructions here: http://technet.microsoft.com/en-us/library/cc749311(WS.10).aspx

I added MyDefrag and the System Disk Monthly script (http://www.mydefrag.com/) to defrag the hard drive, and sdelete (http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx) to zero the free space on the image.

Once you finish running those two tools on your image, you can execute the following command from the command prompt to tell VirtualBox to make a copy of the VDI file with the free space removed:

vboxmanage clonehd <sourcevdifilename> <destinationvdifilename>

I then use 7-Zip (http://www.7-zip.org/) to compress the cloned VDI.

SharePoint 2010 Development VM: Part 8-SharePoint install/configure script


Overview

  1. Create Server Core VM and setup Active Directory Services
  2. Install and prepare Server 2008 R2
  3. Install unconfigured SQL binaries
  4. Install unconfigured SharePoint binaries
  5. Sysprep using MySysprep2
  6. Setup AD and configure Group Policy
  7. SQL install script
  8. SharePoint install/configure script
  9. Compacting your images for distribution

The last step is to configure SharePoint.

In order to do this, I modified the AutoSPInstaller from CodePlex found here: http://autospinstaller.codeplex.com/.  The original script was designed to install SharePoint from scratch, and I just needed the parts that configure an already-installed instance.

I chose to go with a minimal install of SharePoint to keep the development VMs lightweight.  I’m only setting up the minimum service applications.

Also, the script creates a new site collection at http://sp, so you will need to edit your HOSTS file and point ‘sp’ to localhost (i.e. add the line 127.0.0.1   sp).

The scripts were rather long, so I decided not to include them in the post.  Here is a link to download them from my SkyDrive.

http://cid-f23e38ad86d1dc3a.office.live.com/embedicon.aspx/Public/SharePoint%20Install%20Script.zip

The zip contains three files: SetInputs.xml, Launch.bat, and AutoSPInstaller.ps1.  I find that I need to use “Run as administrator” when running Launch.bat or I get an error about setting the execution policy.

Conclusion

Now, I create an ISO image using ImgBurn that includes both the SQL and SharePoint install scripts, and I distribute this with my base VM images.