Speednet's Blog

Implementing ":Nth-child" CSS selector that works with IE7 and IE8

Wow, it's been almost 2 years since I wrote my last technical blog entry!  I guess the busier things get, the harder it is to set aside time to document things and "give back" to the developer community.

So I guess since it's been so long, I should first mention that this is Todd, using my Speednet account at Lottery Post.  I write technical blog posts using this user account from time to time (sometimes waiting 2 years or so, *ahem*) in order to share technical thoughts and articles with other Web developers.

I strongly believe in the concept of "giving back" knowledge that is learned through great difficulty, because it will encourage others to do the same.  The end result is that the Web becomes a better place for all the sharing.

Getting to the topic at hand, one of the things developers constantly struggle with is that they want to develop Web sites using all the technologies and methods found in the latest Web browsers, but oftentimes doing so means that people using older Web browsers either cannot see or use the page/feature, or else it's buggy or has worse functionality.

It is very common in Web development to have a minimum browser version to develop for, meaning that all pages are supposed to work and look the same in the minimum (worst) browser specified, as well as all browsers that are better than that.

Up until this year, that "worst" browser has typically been Internet Explorer version 6 (IE6).  By today's standards, it's an absolutely horrendous browser — extremely buggy, unsecure, and lacking features.

But, thanks to all the emphasis made in the industry to get people off of IE6, the percentage of people using IE6 has dropped to a very small number.  (At Lottery Post that number is about 1.5% of all visits to the site.)  Thus, most developers have punted IE6 as the minimum specification, and are now focused on IE7 as the minimum browser to support.

Lottery Post has likewise punted on IE6, and I focus testing on IE7 and above.  IE6 still works in most cases, but I almost never test anything using IE6.  And I have posted a warning message for over two years on Lottery Post that is visible to IE6 users, telling them that their browser is no longer supported.  (Frankly, IE7 users have been seeing a similar message as well, and eventually I'll be dropping support for that version too — but not quite yet.)

As a developer, IE7 is definitely much better than IE6, as far as supporting various important Web technologies, but it is still far from being considered "modern".  So it is still tricky to develop in a cross-browser manner, in which one set of code works exactly the same for all supported browsers.

That's why a topic like this one is important:  I have come across a technique for writing a piece of CSS that works even in IE7, reproducing a selector that IE7 doesn't support.

I was trying to specify CSS rules in a Web page using the :nth-child pseudo-class, in order to set styles on table cells according to their order in the row.  For example, to style the third cell in each row of a table that has a class name of "stats", I could use an expression like this:

table.stats tr td:nth-child(3) { color: red; }

In the latest Web browsers, like IE9, Google Chrome, Firefox, Safari, etc., it works perfectly:  each row's third cell has red text.  But if you try looking at it with IE7 or IE8, the text is not red, because the unsupported :nth-child pseudo-class is ignored.

So I just happened to be flipping through one of my favorite CSS books (The Ultimate CSS Reference, by Tommy Olsson & Paul O'Brien), and although it did not present this solution, it provided what I needed to figure it out.

The trick is to use CSS selectors that IE7 and IE8 can process, and string them together.

Instead of jumping straight to the "nth child", you first match the first child element using the :first-child pseudo-class, and then repeat the adjacent sibling combinator (+) until you reach the child you want to style.

Using that technique, which will work in all browsers including IE7 and IE8, the expression changes to:

table.stats tr td:first-child + td + td { color: red; }

It seems pretty simple once you see it, but CSS developers who are accustomed to selectors constantly driving down a stack of elements (instead of across sibling elements) may not even think it would work until they actually try it.  But it does!

This expression does indeed work like the :nth-child pseudo-class, but not exactly.  If you want it to precisely mimic :nth-child you need to use a universal selector instead of an element name.  For example:

table.stats tr *:first-child + * + td { color: red; }

However, using the previous expression with the element name selectors is better in the case of the table.  You would use the universal selectors if you are unsure what the child elements will be, or what their order will be.
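
If you need this pattern at several different positions, the chain can even be generated programmatically.  Here is a minimal JavaScript sketch of the idea (the function name is my own, purely for illustration):

// Builds the IE7/IE8-safe equivalent of "tag:nth-child(n)" by
// chaining the adjacent sibling combinator, as described above.
function nthChildEquivalent(tag, n) {
    var selector = tag + ":first-child";
    for (var i = 1; i < n; i++) {
        selector += " + " + tag;
    }
    return selector;
}

// nthChildEquivalent("td", 3) returns "td:first-child + td + td"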

Of course, there are a couple of minor exceptions where this technique will not work, as is always the case when programming for IE7.

  • You cannot dynamically insert elements before the first child element, because IE7 will not re-calculate the first child after the page is rendered.  (See the sketch after this list.)
  • You cannot have HTML comments anywhere before the first child or between the adjacent child elements, because IE7 counts the comment as one of the child elements.
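
To make the first caveat concrete, here is a quick JavaScript sketch (the element ID is hypothetical) of the kind of DOM manipulation that breaks the chain in IE7:

// IE7 evaluates :first-child when the page renders; inserting a new
// first cell afterward does NOT shift the styling to the new column.
var row = document.getElementById("statsRow");
var newCell = document.createElement("td");
row.insertBefore(newCell, row.firstChild);
// The old first <td> keeps matching td:first-child in IE7,
// so the chained selector now colors the wrong cell.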

Other than these caveats, you're pretty much good to go.

I hope this helps a developer out there!

Entry #24

Fixing a long-standing issue with TinyMCE

I do a lot of development work with the TinyMCE text editor, since it's the one that powers all the rich text editing capabilities at Lottery Post.  TinyMCE is one of the most popular JavaScript-based text editors in use today.

I wanted to pass along a solution to a problem I finally solved today, for other developers who also use TinyMCE in their development projects.

The problem is that in Internet Explorer, when you hover your mouse over the blank text editor (no content currently in the editor), instead of the nice I-beam mouse cursor, you still see the arrow pointer.

In all other browsers you see the proper I-beam.

Why is this?  First of all, the text editor is basically an HTML document loaded into an iframe.  It contains all the normal HTML document structure as a Web page, only you're able to edit this particular Web page.  In IE, the body tag does not automatically extend to 100% width and height, whereas in other browsers it tends to inherit the full width and height of the viewport.  Further complicating things is that the HTML tag is an actual DOM element in IE, and the body tag seems to adapt to the dimensions of that HTML element.

As a result, even if you set cursor: text on the body tag within the editor, you won't see the I-beam in IE, because the body tag's width and height are zero. (Or maybe just the height of one line of content.) Because all other browsers automatically extend the body tag dimensions to 100%, you'll see the I-beam in those other browsers.

So, I knew from previous experience that the trick is to set 100% height on BOTH the html tag and body tag, like this:

html { height: 100%; }

body {
   height: 100%;
   cursor: text;
}

(Note: I also included the cursor: text declaration in the body tag style, which forces the I-beam to appear.)

But there's a problem with doing that in TinyMCE. Because the body tag contains top and bottom padding, the height of the body tag ends up becoming 100% plus 6 pixels (given a top and bottom padding of 3px). As a result, the scroll bar shows up on the right, ruining the editor appearance.

In non-IE browsers, the answer to this would be pretty simple: Set the box sizing model for the body tag to border-box, and the body tag height would become 100% exactly, because the height calculation would then include the padding. But, of course, IE versions 7 and lower do not have support for the box sizing model, so that won't work.

This got me to thinking about any possible IE-only browser capabilities that could help to get that body tag to exactly 100%, including the padding. Dusting off my ancient CSS knowledge, I remembered the IE-only expression() capability, which lets you use JavaScript code within CSS.

So I changed my CSS to the following:

html { height: 100%; }

body {
   height: expression(
       (document.documentElement.clientHeight - 6)
       + "px");
   cursor: text;
}

This would seemingly solve my problem perfectly, because the 100% height set on the html tag would not be a problem for any other browser, since no other browser recognizes it as a real DOM element, and the expression() set on the body tag's height would also be ignored by all other browsers.

So I fired up my test editor, and voila — it worked! I could hover my mouse over the blank text editor and see the I-beam everywhere the mouse moved. I tested it in other browsers — fine there too.

Then I used the resize handle on the text editor to change the editor's size, and oops — the body tag height was frozen at whatever the initial height was. The editor size was changing just fine, but the body tag stayed a fixed height.

This didn't make sense — I thought expression() was supposed to be a dynamic property that would constantly refresh itself. After all, so many performance books and blogs complain about how expression() should be avoided because it slows everything down from constant refreshing.

So I did some more research and discovered that when IE first interprets the expression, it makes a determination of whether the expression will always return a fixed value (a static value) or if the value will change over time (a dynamic value). I guess this is a performance tweak to make expressions faster.

The way I had coded my expression apparently made IE think that the value was static, so it was only evaluated upon initial page load, and never refreshed after that.

I could not locate a set of hard-and-fast rules for forcing an expression to become dynamic, so I tinkered with it a bit and found that including some kind of test condition would do the trick.

There are indeed many things you can put in the expression that will slow performance to a crawl. For example, you can assign variables, do conditionals (e.g., test? one : two), etc., but they all kill performance because they get executed so many times.

But I did find a dynamic expression that seems to evaluate very fast, and does not slow performance significantly. (IE is slow anyway, but I digress...)

Simply testing for the existence of the window object — something that will always be there — was enough to force a dynamic value, while remaining very fast.

So my final CSS, which dynamically updates the body tag height as the user changes the editor size, is as follows.  (I included some other code in the body tag style definition to ensure all other browsers render perfectly.)

html { height: 100%; }

body {
   box-sizing: border-box;
   -moz-box-sizing: border-box;
   -ms-box-sizing: border-box;
   -webkit-box-sizing: border-box;
   height: 100%;
   height: expression((window) &&
       ((document.documentElement.clientHeight - 6)
       + "px"));
   cursor: text;
}

Simply insert this CSS code at the top of the content.css file in the skin folder you're using, and IE will give you the proper behavior for mouse-over.  I tested it in IE 6, 7, and 8, and all work fine.  I also tested it in the IE 9 preview, and it didn't work (because it no longer supports expressions), but there's still plenty of time to deal with that.

I hope this tip will help alleviate some long-standing headaches for TinyMCE devs!

Entry #23

New jQuery Watermark plugin released

I have just finished publishing my latest free plugin, called Watermark plugin for jQuery.

The new project has been released on the Google Code site, and the link to the project home page is: https://code.google.com/archive/p/jquery-watermark/ 

The plugin makes it simple to add those little light gray tips inside text entry spaces on a Web page.  For example:

[Image: a sample text box displaying a light gray watermark hint]

The minified version of the code, which is included in the download package, adds less than 2,000 bytes to the page size, so the effect of the code is practically non-existent.
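
For reference, a typical call looks something like this sketch (the field ID is hypothetical; see the project documentation for the exact API):

// Attach a watermark hint to a text input:
$("#email").watermark("Enter your email address");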

The project home page has an issue tracking system, so please report any bugs found.

UPDATE: Google has terminated their code project website, so the link above points to their long-term storage of the project.  You can download source code there, as well as view all the project documentation.  However, the project is no longer being actively updated.

Entry #22

Nifty file utility that every developer can use

Here's a scenario faced by practically every developer at one time or another:  you want to delete a file, but it refuses to delete because Windows sees the file as "locked" by another user.

Sometimes this can be a real file lock, when a user on a different computer does actually have the file open, or sometimes it can be file-locking "junk" left over when a computer that did have a lock crashed or suddenly closed an application.

There has never been a simple "unlock" command to get rid of the file lock (in order to delete the file), no matter why it's being locked, leading to a cavalcade of utilities to work around the problem. 

The classic utilities that you'll come across when searching for keywords like "force delete" are programs that delete the locked file at the next boot.

But what a pain in the neck that is!  I refuse to install one of those programs, because I always have many programs running, and a fresh boot, done merely to delete a file, is just too painful.

Over the years this issue has been a major gripe with me, which is why I was so happy to finally find a good solution this evening.  And the thing is that the utility has been around since 2006.

The utility is called PsFile, and is a part of the brilliant SysInternals suite of utilities created by Mark Russinovich. 

Frustratingly, I've looked through SysInternals in the past seeking this same solution, but I always overlooked PsFile because the option to do the unlocking is so subtly mentioned in the documentation.  It comes off as a utility merely to see who has a file open — not a utility to actually close the file/lock.

In my frustration of trying to unlock a file this evening, I had downloaded PsFile just to give me an idea of who/what was holding the lock.  I ran the program and saw the file lock, but then looking back at the documentation I noticed a little "-c" command line option.  The documentation states, "-c : Closes the files identified by ID or path."  Wow!

I gave the option a try, specifying the ID number of the file (which was shown simply by typing "PsFile" at the command prompt), and the file was instantly closed.  Awesome!

The exact syntax of the unlock is:

PsFile 1234 -c       (1234 is replaced by the actual file ID, or can be a path name)

The PsFile utility can be found (and downloaded) here: http://technet.microsoft.com/en-us/sysinternals/bb897552.aspx

One of the things I love about all the SysInternals utilities is that there is no installation needed.  Just unzip the download and copy the included .exe file to your hard disk, and then run it.

SysInternals is extensive and indispensable, and its home page can be found here: http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx

I hope this helps anyone who's experienced a similar problem!

Entry #21

ASP.NET AJAX: Replacement CSS add/remove functions

I came across a major performance issue in the Microsoft ASP.NET AJAX client library (JavaScript library) today.

Because the nature of the issue is ingrained in the client library code, the only way around it was to completely abandon use of the functions that add, remove, and test for the existence of class names assigned to DOM elements, and develop my own.

Specifically, the following functions are involved/affected:

  • Sys.UI.DomElement.addCssClass
  • Sys.UI.DomElement.removeCssClass
  • Sys.UI.DomElement.containsCssClass
  • Sys.UI.DomElement.toggleCssClass

In program code that changes only a few classes here and there, nobody would ever notice performance problems, but when iterating over a large set of DOM elements... well, that's a different story.

In the process of investigating the performance issues, I took a look at the code behind those functions.  I didn't do a bunch of stand-alone, isolated tests to prove my theory (because frankly I don't have the time to do it!), but I believe the problem lies in the containsCssClass function. 

Internally, almost every other CSS function calls containsCssClass, so because containsCssClass has a performance issue, it ripples through the other functions as well.

The containsCssClass function does a lot of unnecessary work just to find out whether a particular element contains a specific class name.  It performs a JavaScript split call on the class string, then goes through the newly-created array one element at a time to look for a match.  In doing that, it makes various function calls, performs redundant typeof checks — and performs a case-sensitive test on the class name!

So if you perform many addCssClass calls in a loop, you're actually performing lots and lots of expensive calls involving dynamically-created strings and arrays, as well as unnecessary type- and bounds-checking, hurting not only performance, but memory usage.

The case-sensitive matching is also a problem for me, because CSS class names are not case-sensitive by definition.  The class name "MyClass" should be able to match with "myclass", but it would not match when using addCssClass, so you could end up having multiple copies of the same class name in a DOM element.

removeCssClass also suffers from case-sensitive matching.  Not good.

Another potential problem I found when looking at the Microsoft code is that it does not account for the possibility that a class name appears in the element more than once.  It may be a mistake on the part of a developer to do that, but it happens, and it should be accounted for.

For example, given the class name "customer gold vip gold", if you execute the function call Sys.UI.DomElement.removeCssClass(element, "gold"), the new class name will become "customer vip gold".  See how the class "gold" is still included in the string, even though you removed it?  That's because only the first instance of "gold" was removed.

Last, I hope Microsoft does something about those lengthy namespaces.  Having the CSS class manipulation functions nested four levels deep in a namespace hierarchy is just plain inefficient and verbose.  Maybe they will fix that problem in .NET 4.0.

Function Rewrites

Someday — probably not too distant in the future — I'll be converting much of my code to take advantage of jQuery, but until that time, the solution for this type of problem is to rewrite the code myself.

The end result of my efforts was very easy to measure:  a loop that modifies many class names, which was taking 15 seconds to run, now takes 1 second to complete.

I also threw in an extra function that allows class names to be removed with a regular expression (Regex) instead of a fixed string.  (This is useful in cases such as when you want multiple class names removed.  It can now be accomplished with one function call.)

My new program code for CSS manipulation is shown below. 

In the code, I have wrapped the functions in a global object called "DOM", so calling the functions is accomplished like this:  DOM.addCssClass(...).

Of course, the global object ("DOM") can be called anything.  I personally like using short names for heavily-used global objects, because it keeps code tighter and more legible.

var DOM = {

    addCssClass: function (element, className, noTest) {
        if ((noTest) || (!DOM.containsCssClass(element, className))) {
            element.className = (element.className + " " + className).trim();
        }
    },

    containsCssClass: function (element, className) {
        return ((" " + element.className.toLowerCase() + " ").indexOf(" " + className.toLowerCase() + " ") >= 0);
    },

    removeCssClass: function (element, className, noTest) {
        DOM.removeCssClassRegex(element, new RegExp("(^| )" + className + "($| )", "gi"), noTest);
    },

    removeCssClassRegex: function (element, classRegex, noTest) {
        if ((noTest) || (classRegex.test(element.className))) {
            element.className = element.className.replace(classRegex, " ").replace(/\s{2,}/g, " ").trim();
        }
    },

    toggleCssClass: function (element, className) {
        if (DOM.containsCssClass(element, className)) {
            DOM.removeCssClass(element, className, true);
            return false;
        }
        else {
            DOM.addCssClass(element, className, true);
            return true;
        }
    }
}

The above code is also available for download.
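
Calling the new functions is straightforward.  Here's a quick usage sketch (the element ID and class names are hypothetical):

var panel = document.getElementById("panel");

DOM.addCssClass(panel, "highlight");     // adds the class only if not already present
DOM.toggleCssClass(panel, "collapsed");  // returns true if the class was added
DOM.removeCssClass(panel, "HIGHLIGHT");  // case-insensitive, so "highlight" is removed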

Designed for Speed

A few key points about the design of the new CSS class name manipulation code:

  • There are built-in ways to ensure that the testing of values and class names is only done once per call.  In addCssClass, setting the noTest argument to true signifies that the class name definitely does not exist in the element, and the containsCssClass function call is skipped.  In removeCssClass, the noTest argument set to true indicates that the class name does exist, without the need to test again.
  • containsCssClass does its testing using a simple indexOf string function, which is very fast.  It also performs a case-insensitive test — which, as stated above, is something Microsoft's version omits.
  • removeCssClass now removes all instances of the given class name, not just the first one.  It accomplishes the removal with a very simple regular expression (Regex), which is also efficiently used to test for the existence of the class name (if noTest is not true).  Sometimes an efficient regular expression pattern can outperform a bunch of string manipulation, depending on what you're trying to do.
  • A "bonus" function was added, removeCssClassRegex, which can remove class names by specifying a regular expression instead of a fixed string.  For example, to remove both "gold" and "vip" class names from an element, you would call: DOM.removeCssClassRegex(element, /(^| )(gold|vip)($| )/gi);

Are there ways of making the code even faster?  Possibly.  But probably not by another order of magnitude like the one I just gained.  Also, one has to balance performance with clarity of the code, which can sometimes suffer when the only objective is performance.

Generally speaking I like what Microsoft has done with its ASP.NET AJAX client library.  It's a pretty amazing achievement.

At the same time, it's always good to recognize the parts that need improvement.  Microsoft is doing a good job of listening to its community of developers these days, so I'll continue trying to point out things like this whenever I can, in the hope that they get noticed and fixed.

Entry #20

Supporting Google Chrome with Browserhawk

Lottery Post uses Browserhawk (from Cyscape) to detect browser capabilities, so if a user's web browser does not meet the minimum standards required by the site, they can be shown a detailed message page, with a description of exactly what is under-functional.

When the new Google Chrome Web browser was released in the past week, Browserhawk did not detect the new browser properly (and still does not), so users of the new browser had problems logging in to the site.

Since Chrome uses the Apple WebKit open source codebase as its foundation, Browserhawk mistakenly detected the browser as Safari 1.0.

To remedy the situation — at least until Cyscape updates their browser definitions — I have re-programmed the main browser definition file to detect the Chrome browser as Safari 3.1.  (Under the covers the Chrome rendering engine is Safari 3.1.)

For anyone else who uses Browserhawk for browser detection, adding Chrome to the browser definitions is fairly straightforward:

  1. Open the Browserhawk Editor.
  2. Select File, Open..., and then select maindefs.bdf and click Open.
  3. Right-click on the Safari folder (in the left folder/browser list), and click Add... in the context menu.
  4. In Browser Description enter Chrome.
  5. In Identifying user agent string enter Mozilla/*AppleWebKit/[5-9][0-9][0-9]*Chrome*Safari*.
  6. Click OK.
  7. Change Majorver to 3.
  8. Change Minorver to 1.
  9. Change Version to 3.1.
  10. Change Fullversion to 3.1.
  11. Make sure the rest of the properties match the properties in the Safari v3 entry by clicking back and forth between Chrome and Safari v3 — except for the last two properties.
  12. Set Versionposx to 0 [zero].
  13. Set Versionposstr to an empty string.
  14. Click the Save icon in the editor toolbar to finish the process.

To test to be sure Chrome is being properly identified as Safari 3.1, go to www.mybrowserinfo.com and click on See Detailed Location and Browser Information.

In addition to seeing that it is correctly identified, also be sure cookies and JavaScript are being detected as "enabled".

Entry #19

ASP.NET AJAX Client Library: Combining createDelegate and createCallback

When working with the ASP.NET AJAX client library, I find that I occasionally need to use both Function.createDelegate() and Function.createCallback() simultaneously.

Each time you use one or the other of these methods, the library places a "call wrapper" around your function.  Then, when the new delegate (or callback) is called, there are actually two calls being made: the first to the wrapper, and the second when the wrapper calls your function.

By combining the use of both methods, you're now placing two call wrappers around your function, which is fairly inefficient, not to mention that it's not exactly elegant or readable.

Why would you want to combine the two methods?

You'd want to combine them when you want to ensure that this refers to a specific object, and you also want to pass specific arguments ("context") to the function.

For example, let's start with two objects, each containing an array of messages (strings).   We also have a function that expects that this will refer to one of the objects, and it will expect to receive the index of the message to display as an argument.

(This is a very contrived example, but it shows the requirement and solution in simple terms.  Real-world situations that require the use of both createDelegate and createCallback are more complex.)

var myObject1 = {
    messages: ["Red", "Blue", "Black"]
};

var myObject2 = {
    messages: ["Audi", "Chevy", "Mitsubishi"]
};

function showAlert(index) {
    alert(this.messages[index]);
};

Now, I'll create two delegates/callbacks: one will call showAlert() and display a color, and the other will call showAlert() and display a car type.  I'll do this by combining the use of createDelegate and createCallback.

var showColor = Function.createDelegate(myObject1, Function.createCallback(showAlert, 2));

var showCar = Function.createDelegate(myObject2, Function.createCallback(showAlert, 1));

If we were to call showColor(), the user would see the word "Black" appear in an alert window.  Likewise, if we were to call showCar(), the user would see the word "Chevy" appear in an alert window.

The significance of what took place here is that both showColor and showCar can be passed as simple function references to an event or method, and they retain not only the calling object context that we desired (this), but also the argument(s) that we needed to pass (the "context").  It allows us to use one single function (showAlert) to satisfy the display requirements of both objects (myObject1 and myObject2).

The problem with the showColor and showCar methods, as written above, is that they are inefficient because they make a total of three calls each time one is called (two wrapper calls plus the actual function call), and looking at the code, it can be difficult to understand what it is doing (i.e., the code lacks readability).

To solve both issues, I have created a new method called Function.createDelegateCallback().

In one step, we can specify a this reference, the function to wrap/call, and arguments ("context") to pass to the function.

The simplest way of creating the new method would have been to create a method that calls Function.createDelegate(this, Function.createCallback(func, args)), which would solve the readability issue, but would do nothing to solve the inefficiency issue.

Instead, I started with the source code for createCallback, and modified it to include the createDelegate functionality, making sure that only one wrapper call would be placed around the function.

Note: the new createDelegateCallback method is added directly to the JavaScript Function object, just like createDelegate and createCallback are today, so it is utilized in exactly the same way.

Update: After some additional testing, I have slightly modified the code below.  I am not sure why Microsoft coded the for loop the way they did in createCallback, but I had accepted at face value that it was the best way.  I now believe the version below is better, and my testing bears that out.  There may well be a reason they coded it the way they did; I just can't figure it out.

Additional note: If you want to pass more than one argument to the target function, simply include them after the context argument.

Function.createDelegateCallback = function (instance, method, context) {
    /// <param name="instance" mayBeNull="true"></param>
    /// <param name="method" type="Function"></param>
    /// <param name="context" mayBeNull="true"></param>
    /// <returns type="Function"></returns>

    // Capture the arguments passed to createDelegateCallback itself;
    // inside the returned wrapper, "arguments" would otherwise refer
    // to the wrapper's own call-time arguments.
    var outerArgs = arguments;

    return function() {
        var l = outerArgs.length;

        if (l > 3) {
            var args = [];

            // Collect the context argument plus any arguments that
            // followed it (outerArgs[0] and [1] are instance and method).
            for (var i = 2; i < l; i++) {
                args[i - 2] = outerArgs[i];
            }

            return method.apply(instance, args);
        }

        return method.call(instance, context);
    };
};

Now that we have the new createDelegateCallback method, we can re-write the methods above to make them more efficient and readable:

var showColor = Function.createDelegateCallback(myObject1, showAlert, 2);

var showCar = Function.createDelegateCallback(myObject2, showAlert, 1);
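
And because any extra arguments are picked up after the context argument, a single call can hand several values to the target function.  A quick sketch (showBoth is a hypothetical function, reusing myObject1 from above):

function showBoth(index1, index2) {
    alert(this.messages[index1] + " / " + this.messages[index2]);
}

// Arguments after the third parameter are all passed along to showBoth:
var showTwoColors = Function.createDelegateCallback(myObject1, showBoth, 0, 2);

showTwoColors();    // alerts "Red / Black"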

createDelegateCallback is not something you'll use every day, but if you do a lot of client-side coding with the ASP.NET AJAX client library, you will be glad someday that you have it.

Entry #18

Smart String Concatenation

Whether on an individual computer or a server attached to the Internet someplace, people are always looking for better performance from their software.

The first thought that comes to mind is often adding memory or a faster processor, or choosing a faster operating system or web browser.

But many times (perhaps most times) the real culprit for slow performance is the programmer who wrote the software.

As a programmer, my philosophy is that I personally take responsibility for the performance of my programs, rather than require faster hardware or environment.  I have found that by forcing myself to think about the efficiency of every piece of code I write, the combined performance savings across an entire application is immense.

Think about it: if you save just a quarter of a second of time serving one page view, you have actually saved about 70 hours of processor time after a million page views.  And for a site like Lottery Post that gets many millions of page views a month... well, you can do the math.

With that concept in mind, I'd like to offer some tips about string concatenation that will make your programs more efficient.

String concatenation — the process of combining strings — is something that is still slow on many popular platforms.  For example, on all versions of Internet Explorer (up until IE8 is released), combining strings is very inefficient.  Even in the .NET framework, there are some ways of programming that will result in poor performance.

String Concatenation in .NET

In the .NET framework, whether you're programming in C# or VB, the best way to combine strings is using the String.Concat() method.  However, just using String.Concat() is not enough, and here's where the extra efficiency comes into play:

Make sure that all the arguments passed to String.Concat() are strongly-typed Strings.  Otherwise, an overloaded version of String.Concat() will be used that accepts all Object arguments, and each value you pass will be boxed to an Object (when passed) and un-boxed from an Object (when concatenated by the method).

When you test the program, it works just fine either way, but if you are using the Object argument version of the String.Concat() method, you are silently leaking performance.

For example, if you want to create a string like "There are 27 pages", the second version shown below is more efficient:

myString = String.Concat("There are ", intPages, " pages")

myString = String.Concat("There are ", intPages.ToString(), " pages")

(Note:  I did not use CStr(intPages), I used intPages.ToString().  That's because the CStr() VB function accepts an Object type, which will require boxing/unboxing, whereas I believe an Integer can execute its ToString() method quicker.)

In the first example, all three arguments are treated as Object types, but in the second example, all three arguments are String types, so the compiler chooses the quicker all-String-arguments version of the String.Concat() method.

Why not use VB's string concatenation operator ("&")?  Because when you compile your application, the compiler breaks everything down into String.Concat() operations anyway, and you can do a much better job of that than the compiler could.  If you ever took a look at some of the code that gets generated for combining strings, you would be amazed at what is actually taking place under the covers.

And by all means, do not use String.Format() for your string concatenation, unless you are actually using the formatting capabilities of the method (i.e., transforming data into another format).  Even though it can produce slightly more readable code, String.Format() has lots of overhead that makes it a poor choice.

Programming efficiency

Sometimes if you think about the way the computer is executing the code, rather than how your mind is assembling the string, you can come up with some nice efficiencies.

For example, let's say you're creating the name of an image file based on the value of a couple different variables.

With the variable names shown in <brackets>, the file name will be "icon_<isCircle>_<size>_.<isGif>", where <isCircle> is a boolean (True for "circle", False for "square"), <size> is any integer (such as 16, for a 16 x 16 image), and <isGif> is a boolean (True for "gif", False for "jpg").

Here is how someone would normally code this (using the tips above for String.Concat):

myImage = String.Concat("icon_", If(isCircle, "circle", "square"), "_", size.ToString(), "_.", If(isGif, "gif", "jpg"))

or, in C#, the same thing would be:

myImage = String.Concat("icon_", isCircle? "circle" : "square", "_", size.ToString(), "_.", isGif? "gif" : "jpg");

It will indeed work fine, and for top efficiency it uses the all-String version of the String.Concat() method, but why should you force the computer to combine 6 different strings together, when you can do the same thing by combining only 3 strings together?

The following is functionally equivalent to the first solution, but takes half the work to do:

myImage = String.Concat(If(isCircle, "icon_circle_", "icon_square_"), size.ToString(), If(isGif, "_.gif", "_.jpg"))

or, in C#:

myImage = String.Concat(isCircle? "icon_circle_" : "icon_square_", size.ToString(), isGif? "_.gif" : "_.jpg");

If you were to go through your program code, how many times would you see something similar to the first solution, as opposed to the second?  Probably a lot.

JavaScript coding

The same exact approach applies to JavaScript — especially to JavaScript.

As I mentioned earlier, JavaScript is very slow performing string concatenation on IE web browsers, and IE makes up the majority of web browsers in use today.  That's a lot of potential for slow code.

Therefore, when writing JavaScript programs be careful anywhere you combine strings, especially when it's done inside a loop.

The reason behind the slow performance is that each time strings are concatenated, a new copy of the string is created in memory, which requires allocating new memory, copying the contents of the old strings, and then releasing the memory from the old strings.

When doing a lot of string concatenation in JavaScript, it is often much better to create an array of string values, and then use an Array.join() method to combine them.  Each time a new element is added to an array, you're allocating memory for the new element, but you're not copying the old string value, and you're not releasing memory for the old string.

Here is an example comparing regular string concatenation with array-based concatenation.  The example is to create a string containing a comma-separated list of numbers 1 through 100.  The second method is much faster than the first.

Method 1: regular string concatenation

var str = "";

for (var i=1; i<=100; i++) {
    str += (i + ",");
}

Method 2: combine array elements

var a = [];

for (var i=1; i<=100; i++) {
    a[i-1] = i;
}

var str = a.join(",");

Lots of other efficiencies

There are lots and lots of other things that can be done to increase performance of strings and string concatenation.  This only scratches the surface.

Hopefully what this does is to show the types of things that you can think about when you're looking to increase the efficiency and performance of your programs.

I wouldn't go crazy doing things that make your code impossible to read, but at the same time don't be afraid to do things that save only a tiny bit of time.  As you implement lots and lots of small tweaks, eventually they will combine into a much bigger overall savings.

Entry #17

Three cheers for Microsoft! The web is about to improve big-time

I am so happy to report that Microsoft made an announcement this morning that its upcoming IE8 web browser will support all current web browser standards as the default browser behavior!

While I knew that IE8 was going to support the current set of web standards, Microsoft's plan to-date was to require web sites to add a special indicator to their HTML code that would "turn on" this support.

The result of that previous plan would have been a confusing mess of web sites, with many inexperienced web developers wondering why their sites did not render properly.

Also, it would have continued the practice of making old, outdated web browser standards the default experience, and once again the onus would be placed on users to figure out which browser to use for what site. 

This is going to be an exciting time for web developers, as they will finally be freed of the shackles imposed by IE6 and IE7 — as soon as enough people have migrated to IE8.

I've done some development work recently for the Safari 3.0 web browser, which currently supports many of the upcoming IE8 features, and it is just so incredibly liberating to use the new standards.

Features like rounded borders, multiple background images per object, CSS selectors based on attribute values, multi-column layouts, and much, much more.

The trick is going to be getting a critical mass of people upgraded to IE8 as quickly as possible.  Based on the adoption rate of IE7, that will be no easy task.  It was only within the past few months that I've noticed more people using IE7 than IE6.  Hopefully there will be a compelling reason to upgrade (for users, that is).

Here is a link to Microsoft's announcement:

<Sorry, link is no longer active>

Entry #16

Applying Vista SP1 caused VS 2008 compiles to fail (includes fix)

Being a Microsoft Developer Network subscriber, yesterday I was able to get my hands on the new Vista Service Pack 1 (SP1), which includes hordes of fixes and performance tweaks.

The install went well, but was slow.  It probably took an hour to install, and that was only after an hour to uninstall the SP1 Release Candidate that I had previously installed.

The biggest pain was having to tell my virus protection software that every file that changed on my PC was OK.  I wish there was an easy way to do that globally.

Anyway, the only anomaly I discovered was that Visual Studio 2008 started coming up with compile errors after applying SP1.  It indicated a problem in the web.config file, on these specific lines:

<compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4">
    <providerOption name="CompilerVersion" value="v3.5"/>
    <providerOption name="OptionInfer" value="true"/>
    <providerOption name="WarnAsError" value="false"/>
</compiler>

The error message stated that the tag could not have nested child tags.  The catch-22 is that removing the child tags would remove all support for .NET 3.0 and 3.5 functionality, so that was not the solution.

This really had me stumped for quite a while, until I discovered this post:

http://forums.asp.net/t/1186941.aspx

I ran the two patches indicated in the post (rebooting after installing the first one), and that solved the problem.  What's going on, Microsoft?  This is really strange.  I have no idea what those patches install, but they did the trick.

As far as SP1 is concerned, if you installed the Release Candidate, you already have a good idea of the performance improvements and increased stability you'll get.  The best part was getting rid of the annoying "Evaluation Copy" watermark on the screen.

Props to Microsoft for getting SP1 into its MSDN subscribers' hands before they said they would, but hopefully you've learned your lesson never to roll out a service pack like that again.  Treat your MSDN subscribers like first class citizens!

Entry #15

ASP.NET AJAX: Half-way between createDelegate() and createCallback()

I stumbled on a use of Function.createDelegate() method today that seems to be a cross between Function.createDelegate() and Function.createCallback().

I had the need to pass a context to a handler function, with the context being a JSON object containing the state of a few variables.  I tried using createCallback(), which is designed for that purpose, but it did not seem to work well because I was not using it together with a DOM event — just with an internal function call.

So I turned to createDelegate(), using the JSON object as the this argument, and it worked quite nicely.

An example of this syntax is as follows.  The code assigns a function to the variable fn that (when called) displays an alert box containing the element number and id of a DIV with an ID of "ThisOne".  There are certainly more useful uses for this method, but it should provide a clear example of how it works.  The key point here is the function() assignment that uses this to utilize the values passed in the delegate.

var a = document.getElementsByTagName("div");
var fn;

for (var x=0; x < a.length; x++) {
    if (a[x].id == "ThisOne") {
        fn = Function.createDelegate({
            e: a[x],
            num: x
        }, function () {
            alert("DIV #"+this.num+": "+this.e.id);
        });
    }
}

fn();

If you're not sure about the value of the above code, ask yourself, "How would I code the same thing without the createDelegate()?"  Also ask yourself how hard it would be to pass the DOM element itself to your function, as I have done with createDelegate().

Would you use the Function() constructor that allows you to create a dynamic function from a string value?  (Bad idea.)

Would you use global variables?  (Bad idea.)

If your answer is that you'd use apply(), then you're doing the same thing that createDelegate() does internally.  If you didn't think of that before you read this, then you can't claim credit for thinking of it.  ;-)

Entry #14

ChangeAll extension method alters every item in an ICollection(Of T)

My favorite feature of the new .NET 3.5 framework is extension methods.  Ever since I heard they would be available in this version of the .NET framework I have been designing functions in a way that they could be easily converted to extension methods.

For the unenlightened, extension methods are a new feature of the .NET 3.5 framework that allows you to extend any class — whether it is a built-in class, a third-party class, or your own class.

When you create an extension method, you can call the new method as if it were a native method of the class.

As an easy-to-understand example, I created an extension of the .NET String class called SuperTrim().  The method works like the built-in Trim() method, but instead of just trimming spaces from the front and back of a string, it trims any kind of control characters (any character with an ASCII code of less than 32, plus a few others).

After creating the extension method, I can apply SuperTrim() to a string in exactly the same way I would apply a regular Trim():

newString = oldString.SuperTrim()

Now that's cool! 

Last week I came upon a scenario in a program where I had to manipulate all of the items in a collection, so I tried using the built-in ForEach() method.  The trouble is that ForEach() is designed to do something with all the items in a collection, but not to change all of the items in a collection.

ForEach() passes each item, one by one, to your specified function, but it explicitly only passes the item by value, not by reference.  If your function changes the value passed, it is not changed in the collection — only within the scope of your function.

ForEach() also does not pass an index number or any other value that could help determine which item in the collection to change, so any attempt to use ForEach() to change items in the original collection would be a messy affair.

In response, I created an extension to ICollection(Of T) called ChangeAll().  You pass a function to ChangeAll() that will receive each item, one by one, and the return value from your function will become the new value of that item in the collection.

Incidentally, ChangeAll() works great with the new lambda expressions — another wonderful .NET 3.5 feature!

Here is a sample usage of ChangeAll(), which trims all items in a collection (using a lambda expression):

myColl.ChangeAll(Function(value As String) Trim(value))

Pretty nice, eh?

If you don't want to use a lambda, you can just as easily use a regular function, like this:

Function TrimIt(ByVal value As String) As String
    Return Trim(value)
End Function

myCollection.ChangeAll(AddressOf TrimIt) 

There are lots of other uses, besides trimming every item in the collection.  The point is that now it is very easy to change each item of a collection, in a way that is simple and easy to read/understand — thanks to the new extension methods.

When creating ChangeAll() I first tried extending IEnumerable(Of T), but found that the base interface does not have sufficient functionality to do what I needed.  So I then extended ICollection(Of T) instead, which can do some basic manipulation of the collection's items.

So without further ado, here is the code, in its entirety.  This is the entire contents of a file I call "CollectionExtensions.vb" (you can give the file any name).  It includes inline documentation (yay!).

Option Explicit On
Option Strict On

Imports System.Runtime.CompilerServices

Public Module CollectionExtensions

  ''' <summary>
  ''' Executes a transformation function on each item in
  ''' an ICollection(T) generic collection, replacing
  ''' each item with the return value.
  ''' </summary>
  ''' <typeparam name="T">
  ''' The type contained in Collection and that is passed
  ''' to ChangeFunction, as well as the type that must be
  ''' returned from ChangeFunction.
  ''' </typeparam>
  ''' <param name="Collection">
  ''' The collection on which ChangeFunction will be
  ''' applied to each item.
  ''' </param>
  ''' <param name="ChangeFunction">
  ''' Each item in Collection will be passed to this
  ''' function, and the return value will replace the
  ''' original item in the collection. If you wish an
  ''' item to remain unchanged, this function must
  ''' return the item's original value.
  ''' </param>
  ''' <returns>
  ''' Returns the collection, so this method may be
  ''' daisy-chained.
  ''' </returns>
  <Extension()> _
  Public Function ChangeAll(Of T)( _
    ByVal Collection As ICollection(Of T), _
    ByVal ChangeFunction As Func(Of T, T)) _
  As ICollection(Of T)

    If (Not Collection.IsReadOnly) Then
      Dim newCollection(Collection.Count - 1) As T

      Collection.CopyTo(newCollection, 0)
      Collection.Clear()

      For Each item As T In newCollection
        Collection.Add(ChangeFunction(item))
      Next

    End If

    Return Collection
  End Function

End Module

Obviously, all of the above is written in VB, but exactly the same functionality can be created in C# with a few tweaks in syntax.

I hope that this extension can be useful to you, but more importantly, that it can demonstrate the usefulness of the technique, so that you can similarly extend classes in your applications.

Entry #13

IE7 has now overtaken IE6

I am happy to be able to report the news that is the subject of this blog entry:  that IE7 has now apparently overtaken IE6 for browser share.

I base this information on the Active Users page here at Lottery Post, in addition to regular web site log analysis.

Over the past several weeks I have monitored consistent statistics showing at least 25% more IE7 users than IE6 users.  (Looking further down the chain, the IE5 user population is so small at this point that it's a mere blip, and certainly not worthy of breaking one's back to support.)

The Lottery Post user community is an excellent one to use in gauging browser penetration, because it is foremost a very large user population, but perhaps even more importantly, it is made up of mainly non-technical people.  (If technical people made up a large percentage of the Lottery Post site visits there would be completely unrealistic browser statistics, with a high percentage of Firefox users and far fewer IE6 users.)

The migration from IE6 to IE7 is a tremendous benefit to everyone, because that means more and more people are seeing pages as they are meant to be seen, and web site designers are inching closer to the day when they no longer have to jump through hoops to have every web user share the same experience.

An example of this can be seen on the (admittedly cool) Sudoku page I completed and implemented this week.  IE6 users can play the game, just like anyone else, but they are missing some cool button roll-over effects because their web browser is simply incapable of drawing PNG semi-transparent images together with CSS sprite techniques that are used for the roll-overs.

In fact, it was hard enough just to get IE6 to draw the PNG images with no roll-over effect!  To support the image format I had to use conditional comment tags and a special style sheet that would only be read by IE6.

The kind of limitations imposed on web developers by needing to support old browsers like IE6 not only wastes time for people like me, but also limits the amount of creativity I can employ, and reduces the number of cool, friendly features that I can implement.

Thus, I look at the shrinking IE6 population with hope and excitement.

Some of the surge in IE7 usage may be due to Windows Vista making some inroads.  (IE7 is the default web browser in Vista.)  I have been noticing a consistent figure of about 15% of the Lottery Post site visitors using Vista.

Like the migration from IE6 to IE7, the migration to Vista also represents a positive step for web designers, as they can count on each Vista user having excellent support for recent and emerging technologies, like Flash, fonts, graphics (like PNG graphics), etc. — as well as generally better hardware, such as monitor resolution. 

The ability to produce wider pages (again like the Sudoku example, as well as pages like the Lottery Results Gadget Guide) is liberating for web designers, and better for users.

In fact, although I refer to web designers in each case as being the beneficiary of user upgrades, it is in fact the users who ultimately benefit the most.

Many IE6 users have not upgraded to IE7 yet because they don't see the benefit, don't know it's free and/or easy, or because they simply don't know about it.

Do you fall into that category?  Go into the Tools menu of your web browser and select Windows Update.

Follow the options and prompts to set up the Windows Update, if you are so prompted, and then follow the prompts to upgrade to IE7.

You and I both will be happy you did.

Entry #12

Avoid pipelining in Firefox

Last month I posted some tips to help speed up Firefox.

Strangely, Firefox has been giving me some performance problems lately. It loads all the content of a page and about half of the images. Then it has an extremely long pause, maybe 2 or 3 minutes, before it finally completes loading the rest of the images.

After about an hour of uninstalling, re-installing, and tweaking, I have narrowed it down to the pipelining settings.

If I turn OFF network.http.pipelining the page and all the images load completely, but if I turn it back on, I get the same half-loading behavior.

The issue seems to be that on web sites that do not support pipelining, setting pipelining on prevents all the images from downloading.

Google is an example of a site that does not support pipelining, so if you turn pipelining on and do an image search, only about half the images will show up on the results page.

Lottery Post supports pipelining, but some of the ads, which are coming from another company's ad server, do not. The result is that most of the page loads great, but when it gets to loading the ads everything grinds to a halt.

As a result, I strongly recommend turning pipelining OFF ("false") in the Firefox web browser. Set both network.http.pipelining and network.http.proxy.pipelining to false.
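
If you prefer making the change in your Firefox profile's user.js file rather than through about:config, the equivalent lines are:

user_pref("network.http.pipelining", false);
user_pref("network.http.proxy.pipelining", false);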

# # #

P.S. After discovering all of this I came across a Mozilla tip which contains a warning about pipelining. So disabling looks to be a very sensible idea!

Entry #11

Nifty Microsoft utility fixed my IntelliPoint 6.2 upgrade woes

IntelliPoint is Microsoft's mouse companion software, which adds a number of handy features and functions to just about their entire mouse product line. 

I have always liked Microsoft's mice the best, so I always try to keep up with the latest version of IntelliPoint.

My current favorite mouse is the Wireless Laser Mouse 8000, which connects via a small Bluetooth dongle, is as smooth as glass, and is fantastically accurate.  The only gripe I have is the location of the "forward" button (which is not even set to "forward" by default) — it was moved to the right side of the mouse, rather than staying under my thumb together with the "Back" button.

Microsoft recently upgraded IntelliPoint to version 6.2.  When I went to install it, it would almost finish, and then I'd get a nasty error box, which naturally was completely unhelpful, telling me that the problem could be disk space or "some other problem".  Nice.

I even tried uninstalling the old 6.1 version, but it refused to uninstall.  Arg!

Googling, I found a utility called "Windows Installer CleanUp Utility".  Sometimes weird install bugs are caused by new versions of Microsoft installer software, so I gave it a try.

I installed and ran the application, then selected the IntelliPoint 6.1 software and clicked "Remove".  A peek at Vista's "Programs and Features" Control Panel applet confirmed that the software was removed from the list of installed programs.  ("Programs and Features" is the same thing as XP's "Add or Remove Programs".)

Then I tried installing 6.2 again, and it worked fine and completed normally.

The nice thing about the Windows Installer CleanUp Utility is that it is a real program that is now on my PC, ready to use if another install gets screwed up.  Thus, I would definitely recommend it to others as an important toolbox item.

Just a word of caution though: it should really only be used after all other options have been exhausted, because it permanently removes software from the list of installed programs.

Here is a link to the utility info and download page:

<Sorry, link is no longer available>

Entry #10