Brothers In Code

...a serious misallocation of .net resources

Paying Your Competitor's Training Costs - Following Instead of Leading

Have you ever advocated for something new or different, only to have the door slammed with something like “that’s not industry standard” or “what we already do is best practice”? What does this even mean? If standards and practices cannot be challenged, how do they ever change?

I have a quote hanging on my wall:

Do not seek to follow in the footsteps of the wise. Seek what they sought.

Matsuo Basho

To me that means don’t just follow, but seek the reasons they went in that direction in the first place. This might seem simple but the idea has big implications in business.

Every business is different. Different sizes. Different goals. Different customers. Some businesses are innovators and at the forefront of their industry. And some are just trying to catch up. Maybe for that latter group, following what others do is a way to catch up; a way to pull a little closer to their competition. But at some point doesn’t every business want to lead? How can you lead if you’re doing what everybody else is doing?

Of course, the other problem is whether you’re making an apples-to-apples comparison to begin with. “Software” means a lot of different things, but I think you can generalize with the following attributes:

  • Lifespan of the software
  • Size of the team developing the software
  • Size of the software’s user base
  • Distribution makeup of the software

Taking stock of these things matters, because they can drastically affect the VALUE of your development methodology. The value of things like unit testing or automated builds is MUCH higher when you consider the cost of a defect in something like SQL Server. Compare that to your home-grown project tracking system with 10 users, where a deployment means updating a single file on a single server within your organization. The value just isn’t the same.

Then there’s the question of whether something is “standard” to begin with. There are multiple ways to skin a cat, and sometimes a company has to choose a way that makes sense in the most generic fashion. Take Microsoft’s layout of an MVC project in Visual Studio. There are three folders: controllers, models, and views. In a large project where the work might be horizontally separated over different developers, that might make some sense. But frankly, grouping these things first by project domain and THEN by the logical architecture makes a lot more sense to me. If I’m developing a project vertically thru all the layers, I’d much rather have the Customer views, view models, and controllers in one “Customer” folder to save me the disjointed development experience of jumping around. Is this wrong? I don’t think so. If you’re so inclined, you can even assign namespaces to these different classes as if they still had the old layout. The logical architecture of the compiled solution would be identical. The only thing I did was change the physical layout of my code files. This is NOT something that is “standard”.
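
To make this concrete, here’s a minimal sketch (MyStore, CustomerController, and CustomerViewModel are made-up names). The files live side by side in a Customer feature folder, but the namespaces still follow the conventional layout, so the compiled solution is unchanged:

// \Customer\CustomerController.cs - physically in the Customer folder
using System.Web.Mvc;

namespace MyStore.Controllers  //namespace still matches the old layout
{
    public class CustomerController : Controller
    {
        public ActionResult Index()
        {
            return View(new MyStore.Models.CustomerViewModel { Name = "Acme" });
        }
    }
}

// \Customer\CustomerViewModel.cs - same folder, conventional Models namespace
namespace MyStore.Models
{
    public class CustomerViewModel
    {
        public string Name { get; set; }
    }
}

The one wrinkle is the views themselves - the default view engine only searches ~/Views, so if you move the view files into the feature folder as well, you’ll need to add that location to the view engine’s search paths.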

Now, am I advocating against “industry standards”? Of course not. Take on as many of them as you can. When something is industry standard it can also mean better resources and better support. But the key is to use those standards because they solve a problem that you actually have. When you don’t actually have those problems, your employees are solving problems that their NEXT employer needs solved. Do you want to pay for training for your competitor’s employees?  



Pin a Second IE Instance for Development

This is sorta dumb.  I wanted a second pin for IE: one for regular use, and one for something I'm developing.  This seemed simple - make a local file and just pin it.  Of course, it doesn't work - the link just gets pinned to IE.

So in desperation I just made this dummy page.  I could have made a pinned site from just about anything, but I kinda liked the idea of using a custom icon.

I can't be the only one that hates surfing thru a list of 20 reference links to find the browser instance that's displaying my current project right?



Synchronizing Config Files with Post-Build Events

If you have a test project, you'll get tired of keeping your configs in sync.  Depending on how your project is set up, you can sometimes point some sections at a centralized config.  However, most of the time I prefer to just use a post-build event to copy the file from one project to another.

copy "$(SolutionDir)MyWebProject\Web-ConnectionStrings.config" "$(ProjectDir)MyProject.Tests.ConnectionStrings.config"
echo ^<!--copied via post-build event in $(ProjectName)--^> >>"$(ProjectDir)MyProject.Tests.ConnectionStrings.config"

The first line copies the file (the quotes guard against spaces in the solution path).  But I wanted myself or other developers to know that the file is being overwritten, so I added the second line, which appends a message to the end of the file.
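
If you want a cheap guard that the copy actually happened, a quick test works too. This is just a sketch - it assumes your test project's App.config points its <connectionStrings> section at the copied file via configSource, and "MyDb" is a made-up connection string name:

using System.Configuration;
using NUnit.Framework;

[TestFixture]
public class ConfigSyncTests
{
    [Test]
    public void ConnectionStringWasCopiedFromWebProject()
    {
        //if the post-build copy didn't run, this entry won't exist
        var cs = ConfigurationManager.ConnectionStrings["MyDb"];
        Assert.That(cs, Is.Not.Null, "Connection string missing - did the post-build copy run?");
    }
}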

Matching symbols could not be found - Performance Profiler

I was trying to profile a dll by launching it in another process, but when the profiler report was done it didn't have any references to my code - as if the dll was never instrumented.  I finally started looking at the output window, where I noticed something odd - the profiler was instrumenting the dll in obj\debug instead of bin\debug.

When using the profiler wizard, it prompts you to choose which projects you want instrumented.  In doing so, it looks like it decides to use the obj\debug output, which of course the external exe never sees.  The fix seemed to be to explicitly add the dll instead of adding the project.



Goodbye Nunit-agent.exe

Once I started using Visual Studio 2010, I could no longer attach to nunit to debug my tests.  Instead I found that I had to attach to nunit-agent.exe.  Up till now this was a non-issue.  However, some performance profiling methods require a start-up app, and nunit starting its own process was confusing it, so I was forced to find the reason for this new process.

It turns out it's because nunit by default is set to run under the 2.0 framework and it needs to run a separate process in order to test 4.0 assemblies.  The good news is that you can force nunit to run under 4.0 with this change to your nunit.exe.config:

<startup>
  <requiredRuntime version="v4.0.30319" />
</startup>

You can verify the before-and-after effect by looking at Tools -> Test Assemblies in nunit.
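
If you'd rather see it from inside a test run, a throwaway test like this shows which process (and CLR) your tests actually land in - just a sanity-check sketch:

using System;
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class RunnerSanityCheck
{
    [Test]
    public void ShowWhichProcessRunsTheTests()
    {
        //before the config change this printed nunit-agent for me; after, nunit itself
        var p = Process.GetCurrentProcess();
        Console.WriteLine("{0} on CLR {1}", p.ProcessName, Environment.Version);
    }
}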

Querying IIS Logs with Excel and MS Query

I needed to see which services in a WCF project were taking an unfair share of time.  The most accessible method is to analyze the IIS logs.  However, I didn’t want to spend all day figuring out how to use some of the analytical features in Excel, so I opted to use SQL instead.

Converting the File

The logs are in a space-delimited text format by default, so you’ll need to convert them to an Excel file first.
Open Excel, then choose File -> Open and browse to your file.  You’ll need to change the filter to “all files”.

In this case the IIS Logs are space delimited…


For the most part the “general” data type is fine, but you may need to return here and change it if you run into a data type error in your query.


Save the file. Select No when prompted to save in the current format and select xlsx instead.


Querying the Excel File

Create a blank workbook.  On the Data tab, select From Other Sources -> From Microsoft Query.


Choose Excel Files as the data source and then select the Excel file you just created.


You’ll be prompted to choose columns.  You can use the picker or just click cancel to go right to the designer (say yes at the prompt to continue editing).


Add the table(s) (sheets) from your Excel file that you’d like to query.



From there you’ll be in the designer.  You may want to just double-click the ‘*’ in the column selector and click the “SQL” button to get started.  After that you can execute whatever SQL you like.  For example, to find the top running pages, grouped by request:

SELECT t.`cs-uri-stem`, Count(*), Sum(t.`time-taken`) AS 'totaltime',
       Min(t.`time-taken`) AS 'mintime', Max(t.`time-taken`) AS 'maxtime',
       Avg(t.`time-taken`) AS 'avgtime'
FROM `C:\Temp\u_ex130513.xlsx`.`u_ex130513$` t
GROUP BY t.`cs-uri-stem`
ORDER BY Sum(t.`time-taken`) DESC


Click OK in your query and then close the query editor.  You’ll then pop back into Excel and be prompted for a starting cell to place your query data.


Done. If you know SQL better than you know the functions in Excel, this is a much easier way to analyze a spreadsheet.




Parallel.For - Multithreading Made Easy

I was using nunit to make sure a checkbits/signing algorithm was doing what it was supposed to do. Like everything in cryptography it was slow. The following loop took more than 16 seconds:

for (int i = 0; i < 10; i++)
{
  var value = (UInt64)rng.Next((Int32)cipher.MaximumSequence);
  ulong? lastHackedValue = null;
  //for all the checkbit combinations...
  for (int j = 1; j < Math.Pow(2, cipher.CheckBits) - 1; j++)
  {
    //move the checkbits over to the correct spot
    var signature = (UInt64)j << cipher.SequenceBits;
    var hackedValue = value + signature;
    try
    {
      //decode the forged value (this call was elided in the original snippet);
      //it throws ApplicationException("Invalid signature") when the checkbits don't match
      cipher.Decode(hackedValue);
      if (lastHackedValue == null)
        lastHackedValue = hackedValue;
      else
        Assert.Fail("Too many successful cracks for value {0}.  Hacked Value 1:{1}, Hacked Value 2:{2}",
          i, lastHackedValue, hackedValue);
    }
    catch (ApplicationException ex)
    {
      Assert.That(ex.Message == "Invalid signature");
    }
  }
}

I tried using the Parallel class - sending my cpu utilization from 12% to 100% - and got it down to 5 seconds:

var parallelresult = Parallel.For(0, 10, i =>
{
  //...same loop body as before
});

I noticed that assertion failures were not being displayed in nunit correctly.  It turned out the .For call was wrapping them in an AggregateException, so I wrapped the code like this:

try
{
  var parallelresult = Parallel.For(0, 10, i => { /*...same loop body as before*/ });
}
catch (AggregateException ex)
{
  if (ex.InnerException is AssertionException)
    throw ex.InnerException;  //unwrap so nunit reports the assertion failure
  throw;
}

WCF Self-Hosted Red Herring - '...but there is no http base address'

I'd guess that most people who have dealt with WCF have probably run into the following error:

Service cannot be started. System.InvalidOperationException: The HttpGetEnabled property of ServiceMetadataBehavior is set to true and the HttpGetUrl property is a relative address, but there is no http base address.  Either supply an http base address or set HttpGetUrl to an absolute address.

The fix is spelled out pretty clearly in the error.  But today I ran into this error despite having a baseAddress set in my config.  Setting HttpGetEnabled to false allowed the service to start, but I had a service from another project that was configured with it set to true and it worked just fine.  After mucking with the config trying to get it to work, I started thinking outside the box and realized I should check whether the service was actually running properly when it did start.  To my surprise I could not connect to my simple "hello world" service, and a quick "netstat -an" command showed that nothing was listening on the configured port.  That finally pushed me beyond the config and I started looking at other areas.

Finally I took a look at my windows service class (named WCFHostSvc) and noticed this line in my OnStart method:

serviceHost = new System.ServiceModel.ServiceHost(typeof(WCFHostSvc));

I realized I made a dumb mistake and passed in the windows service type rather than the WCF service type.  So the line should have been:

serviceHost = new System.ServiceModel.ServiceHost(typeof(MainWCFService));

Like I said, I made a dumb mistake.  Still, the error handling from the framework could have been a lot better here.  Rather than starting up as if nothing was wrong, why can't the ServiceHost constructor check the class and make sure it has at least one method to call?  Or better yet, check the type against the configured contract in the service configuration?
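
In the meantime you can guard against the mistake yourself.  Something like this in the windows service would have failed fast for me - a rough sketch, where IMainWCFService stands in for whatever your actual service contract interface is:

//inside WCFHostSvc
protected override void OnStart(string[] args)
{
    //fail fast if the wrong type sneaks into the ServiceHost constructor
    var svcType = typeof(MainWCFService);
    if (!typeof(IMainWCFService).IsAssignableFrom(svcType))
        throw new System.InvalidOperationException(
            svcType.Name + " does not implement the expected service contract.");

    serviceHost = new System.ServiceModel.ServiceHost(svcType);
    serviceHost.Open();
}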

Migrate Settings and Preferences for SQL Developer

I'm at the end of a very long day of dealing with a lot of Oracle B.S.  I'll get into that in another post, but I hope to save some other poor sap at least some frustration.  I was lucky enough to get a nice new i7 at work this week and started migrating over my stuff.  Most of the time it's a pst here or a folder there.  But of course, not Sql Developer.  Sql Developer spreads its config over a myriad of xml files in C:\Users\<user>\AppData\Roaming\SQL Developer\systemX.X.X.X (or C:\Documents and Settings\<user>\Application Data\SQL Developer\systemX.X.X.X in XP).  I suppose you could search for all the xml files and drop them in their respective folders, but I'm not sure it's safe and I'm certain it's a total pain in the ass.  I tried it myself, starting with connections.xml, which was easy enough, but I quickly abandoned the idea when I realized things like the folders for the connections were IN A SEPARATE CONFIG FILE!!!  Instead do the following:

  • Delete your existing systemY.Y.Y.Y folder from C:\Users\<user>\AppData\Roaming\SQL Developer
  • Copy your old systemX.X.X.X into C:\Users\<user>\AppData\Roaming\SQL Developer
  • Start up Sql Developer

This time Sql Developer should ask you if you'd like to migrate your settings.

'Send to' Menu Tweaks for Windows 7

Some of the new User Account Control features drive me nuts.  For example, I can't stand it when you open a non-user file in notepad, only to get an access denied message when you save it.  Yes, I know I could start notepad as administrator and browse to the file again.  But that kills me every single time.  Years ago I used to add notepad to my 'Send to' menu and decided it was time to resurrect that practice.

Adding Notepad to the 'Send to' Menu

  • Open Windows Explorer and type shell:sendto in the address bar.
  • Click the Windows/Start button and type notepad into the search box.
  • Right-click-drag Notepad to your 'Send to' folder and select "create shortcut here".
  • In the properties of the shortcut, click Advanced, and check the 'Run as administrator' box.

Now we've got a quick way to edit a file without browsing to a folder twice.  But this got me thinking.  It would be nice not to have to copy and paste the path when I need to work in a folder with a command prompt. 

Adding Cmd.exe to the 'Send to' Menu to Automatically Change the Path

From the notepad example above, it's pretty obvious that the path is sent as the first parameter to the shortcut command.  We can take advantage of that for a new command line shortcut.

  • Click the Windows/Start button and type cmd into the search box.
  • Right-click-drag cmd.exe to your 'Send to' folder and select "create shortcut here".
  • Right-click the shortcut and choose properties.
  • Go to the Shortcut tab and change the target to C:\Windows\System32\cmd.exe /k cd /d
  • Click advanced, and check the 'Run as administrator' box (optional).

Removing the Drives

The last thing I noticed was that my new and improved 'Send to' menu was polluted with all of my network drives and removable drives, so I searched for a way to take them out.  The answer boils down to:

  • Navigate in regedit to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
  • Add a DWORD value named NoDrivesInSendToMenu with a value of 1
  • Log off and log back on

The result should save me a keystroke or two :)