Brothers In Code

...a serious misallocation of .net resources

The Myth of Multitasking

Multitasking kills productivity, makes you dumber, might permanently damage your brain, and like everything that is really bad for you, it’s also highly addictive.  Interested?  Read on.

The Quest for Performance

We all like to think that we give 100% every day.  But what does that mean?  100% of what?  100% of what we are capable of?  What are we capable of?  Don’t we need to ask the circumstances?

What if the lighting hurts your eyes – how much more work do you get done with an ugly cardboard tent over your head? 

What if the office is noisy with interesting conversation?  How much more work would you get done if you put on some headphones?  What should you choose to listen to? 

What if your back hurts from sitting?  How much more work would you get done if you stand instead? What if your feet hurt?  How much would you get done if you sat down? 

What is the temperature of the office?

These variables, and many more like them, are studied, measured, and adjusted in order to raise the productive power of the most expensive resource a business has – human capital.  Some of them have reasonably definitive answers.  An ideal office temperature is north of 72 degrees, instrumental music or nature sounds might shield you from distraction, and having a stand-capable workstation can increase call center performance by over 50%.  But one particular variable seems to be continually pushed, despite actually hurting performance.  That variable is multitasking.

What Does Multitasking Even Mean?

The idea behind multitasking is that we can work on multiple tasks at the same time, thus completing more work in the same amount of time.  This sounds great as we all want to get more work done.  But is it really true?

There are some cases where it seems true.  We can walk to the cafeteria while talking to our spouse on the phone.  We can listen to the radio while driving our car.  We can vacuum while searching for a solution in our head.

We could call this multitasking, but we need to examine the actors involved.  If we zoom out to a group of people, we can easily say they are capable of multiple tasks at once, right?  But does it really make sense to say a group “multitasks”?  Or is it just a collection of individuals each contributing an individual task?

Looking at it that way helps us realize that maybe we jumped the gun on thinking of an individual as the smallest unit.   The human brain is subdivided into multiple systems.  The areas that manage motor function and habit are separate from those that are dedicated to critical thought.  So it’s not that your brain is multitasking, it is that you have given the different parts their own task.

Knowing this does have its advantages.  We can take a skill that requires a lot of mental energy, practice it, and turn much of it into habit, thus freeing our critical thinking center for another task.  Consider people with brain damage that eliminates their ability to form new memories.  They can find their way back to a new house after a walk, or demonstrate knowledge of a practiced sequence, even when they have no memory of either.  Tony Dungy won the 2007 Super Bowl not with a more robust playbook, but by turning player reactions to signals into habit, giving his team a reaction-time advantage.  But this is not true multitasking.  We are instead taking advantage of collective systems.

The Myth of Multitasking

People swear that they can actually multitask, and that it saves time.  Here is the bad news – a Stanford study found that self-described regular multitaskers actually performed worse than those who performed fewer tasks at a time.  It found that multitaskers were more easily distracted by irrelevant information and had poorer memory recognition.

A Better Label – Task Switching

Here is where we should think about what we consider to be multitasking.  We’ve already noted that you only have one part in your brain for critical thinking, so we already can see the flaw in the idea of “simultaneous” work.  What we’re doing instead is jumping from one task to another, to another, and back.  Maybe a more accurate description for attempted multitasking is “task switching.”

This is actually exactly what a computer does.  Contrary to popular belief (prior to the multi-core processor anyway), even your computer doesn’t simultaneously do work.  It rapidly switches amongst hundreds or thousands of tasks to see if they have work to be done and gives them time to do so.

  • It gives time to your keyboard driver to see if you’ve pressed a key.
  • It gives time to your mouse to see if it’s moved.
  • It gives time to the video playing in your browser to see if it needs to draw another frame. 
  • It gives time to Outlook to see if it wants to notify you of the arrival of a new email. 

This “time slicing” is happening thousands or millions of times in a second so it seems simultaneous to you, but it’s happening one at a time.

The Cost of Task Switching

But this switching isn’t free.    The “context” – the related data – for the next task must be loaded.  Just as important, cached information that was speeding up the last task is lost.  And all the while the computer is using resources just to keep track of all these tasks and their needs.
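The cost can be made concrete with a toy model (the numbers here are invented purely for illustration): charge every switch between tasks a fixed "context reload" price and compare a focused schedule to an interleaved one.

```python
# Toy model of task switching: two tasks of 10 work units each, where
# every switch between tasks costs 2 extra units of context reloading.
# (All numbers are hypothetical, chosen only to show the shape of the cost.)

def total_time(schedule, switch_cost=2):
    """Sum the work units, plus a switch cost each time the task changes."""
    elapsed, last = 0, None
    for task in schedule:
        if last is not None and task != last:
            elapsed += switch_cost  # reload the new task's context
        elapsed += 1                # one unit of actual work
        last = task
    return elapsed

sequential  = ["A"] * 10 + ["B"] * 10   # finish A, then start B
interleaved = ["A", "B"] * 10           # "multitask" one unit at a time

print(total_time(sequential))   # -> 22 (20 units of work + 1 switch)
print(total_time(interleaved))  # -> 58 (20 units of work + 19 switches)
```

The same amount of real work gets done either way; the interleaved schedule just pays the reload price over and over.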

Your mind is the exact same way.  When you are working on a task and something comes along, you must stop what you are doing, clear your head, and then attempt to focus on the new task.  How often do you ask somebody to repeat a question, not because the question didn’t make sense, but because you hadn’t yet cleared your previous thought when they asked the question?  You likely didn’t need it rephrased, you just needed it repeated without the interference of the old item still drawing your attention.

Even Task Switching is Bad

Multitaskers may now feel vindicated and be rejoicing, thinking “Yeah, that’s what I do – I switch back and forth between tasks, but I can do it quickly and efficiently.”  But unfortunately, the Stanford study found the opposite.  When asked to switch between identifying even numbers in a list of digits and identifying vowels in a list of letters, people who favored multitasking still did worse than those who didn’t.



A funny way to think about this is to remember the “Squirrel!!” scenes from the movie, “Up.”  In this case, let’s not think of the squirrel itself as a distraction as it might represent a legitimate task.  Instead, pay attention to how long the dog sits there doing nothing as it tries to refocus on the original task.  And now think about how many times you’ve heard or thought of these phrases:

“Ok, now where were we?”

“What was I working on?”

“I know I’ve forgotten something.”

“I’m sorry, what was the question?”

How many times have you walked up to your desk and had no idea what you were last working on, or whether you finished it?  How many times have you walked to a room, completely forgetting why you walked there to begin with, and had to feign a philosophical moment with “Why am I here?”

Lack of Focus Makes for Lost Revenue

But let’s also think about what this means to a company.  If a completed task is billable or has ROI, that benefit to the company might continue paying long after you’ve moved on to something else.  But if you split your time between two tasks, forcing each to take at least twice as long, the ROI you would have received for completing the first task first has been lost.

For example, if I have two year-long projects that will each save $60k/year, I can deliver the first project in full, allowing the business to enjoy that first $60k/year of savings while I’m working on the second project.  But if I choose to work on both of these at the same time, both projects will take longer.  I may deliver $120K of savings in my third year, but that was going to happen anyway.  The $60k I could have saved had I finished the first project right away is gone.  Even if we suspend disbelief and declare that multitasking shaves 3 months off of each project, the 3 year total ROI is still only $90k. 
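The basic scenario above can be sketched in a few lines (using the article's hypothetical figures: two year-long projects, each saving $60k/year, i.e. $5k/month, over a 3-year horizon):

```python
# Sketch of the ROI comparison above; all figures are the article's
# hypothetical example, with savings accruing at $5k/month once a
# project ships.

def savings(finish_month, horizon_months=36, monthly=5_000):
    """Total savings a project delivers between its finish and the horizon."""
    return max(0, horizon_months - finish_month) * monthly

# Focused, sequential delivery: project 1 ships at month 12, project 2 at 24.
sequential = savings(12) + savings(24)

# Multitasking both: each takes twice as long, so both ship at month 24.
parallel = savings(24) + savings(24)

print(sequential)  # -> 180000
print(parallel)    # -> 120000 (the first project's early $60k is gone)
```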

Even Utilization Isn’t a Given

I’m not suggesting that we turn our back on task switching within the space of good time management.  We can all point to times when we must wait for something else to be completed and might as well do something else in the meantime.  But 100% utilization is rarely optimal.  Let’s add a three-month delay in the middle of the first project and fill it with 3 months of the second project.  We deliver $45K (9 months) in benefit by the time our second project launches, or $105K after the full three years.  But if we “waste” a month of time that could be spent on development of project #2, and instead clear the blockers of project #1, we launch the first project a month earlier and the second a month later, making time a wash.  But we also put $5K in the bank a month earlier. 

Of course, it’s fairly obvious that this depends on the return of the two projects relative to each other.  But we also had no conversation about what that delay actually was, or how inefficient a project becomes when it starts to flounder.  We’ve greatly simplified the story of these two projects.  Perhaps the biggest flawed assumption is that resources cleanly jump from project to project.  Switching projects for those three months isn’t free.  There’s certainly spin-up time on the new project, and forgotten ideas on the old one.  100% utilization being optimal is far from a given, and isn’t even likely.

Utilization and Anti-Unity

There's even a possibility that a drive towards utilization through multitasking creates a dysfunctional business.  If multitasking and utilization are pushed as good things, we begin to consider the most "efficient" use of our own personal time.  We draw lines between what is our responsibility and what is someone else's responsibility.  The consequence is that completing business initiatives ends up taking a back seat to "staying busy".  Projects constantly wait for the next resource to free up, dragging out delivery dates or dooming a project to spin forever.  On the other hand, the alternative to the "stay busy" message can be "focus and get it done."  This can increase the chance of people offering help outside of their job descriptions, creating a culture of "how can I help for the sake of this project."

Multitasking – The Bad Drug of Business


If you aren’t yet convinced at a personal level, let’s drop the bomb that multitasking makes you dumber, stresses you out, and might permanently damage your brain.  Studies have shown that multitasking lowers your IQ to the same degree as sleep deprivation and marijuana use.  Multitasking increases levels of cortisol (the stress hormone).  There is even a correlation between people with multitasking tendencies and low brain density in the area of the brain that controls empathy and emotional control.  While correlation does not mean causation, the hope that more focus at work allows us all to get along better with friends and family should be encouraging.

Multitasking is Habit Forming

As we jump around all day finishing tiny little tasks – answering that email that just came in or answering an instant message question – we get a little taste of satisfaction and a bump of dopamine, the happy-brain chemical.  We start to crave those little bumps and go for more of them.  Seriously, how many people can resist clicking on the “new mail” notification for Outlook?

Focus is a Better Answer

We’ve made a pretty good case that multitasking might be better seen as “work without focus.”  If it is possible to take a single task and complete it, then that is what you should do.  But in a workplace that’s full of distractions, that’s easier said than done.
So what can we do?  Here are some tips:

  • Disable distractions
    • Disable the “new message” alert in Outlook, and instead allow yourself designated times of day to check your mail
    • Disable the “you might be interested in…” summary when you open a new tab in your browser
    • Use the “tentative” reply on emails for more than just scheduling conflicts
      • Be honest - indicate you are unsure of your role in the meeting
      • Resist double-booking yourself; you truly can’t be in two places at once
    • Use signals to indicate you are currently “heads down” and are trying to focus
      • Funny signs, a red cup, or wearing headphones all can be turned into “don’t bother me if at all possible”
      • Respect your own signals when others miss them – if someone interrupts you, point to your signal or politely ask the interrupter to leave you a message to read later
      • Drown out office noise with headphones and quiet music
    • Establish a priority of communication
      • Send emails when you’re not in a hurry
      • Use instant messenger or pick up the phone for urgent questions
    • Be courteous of others’ focus
      • Keep office noise down to a minimum.  Use the hallways, offices, and meeting rooms for conversations.
    • Use good meeting etiquette
      • Request others to join your meeting only if you are reasonably sure they will participate
      • Use the “optional” invite option
      • Have a clear agenda
      • Communicate to participants what you expect in their participation
      • Don’t be insulted if you are asked for clarification – they are just being honest and keeping the company’s interest at heart
    • Use other means to get an answer
      • Search the web for an answer rather than asking the guy behind you
      • Be willing to be the person in the middle to ask questions and deliver answers in order to limit the scope of who needs to be involved
    • Practice staying focused just for the sake of it
      • Study and practice mindfulness – see how many SECONDS you can go concentrating on your heartbeat or breathing before you discover a thought has intruded
      • Recognize the intrusion and go back to your concentration point
    • Time-box
      • Designate days or weeks to get a fixed amount of work done, and don’t allow discussion of new work until the next time-box.
      • Consider company-wide organization of these time-boxes so one team’s planning doesn’t conflict with another team’s time-box

Attempts to multitask drag down your work and your mental health. Breaking the habit can be extremely difficult, depending on the depth of your established habits. Give yourself time and patience to change to a focus-based approach to tasks, both at work and at home. Slowly, you may find that the satisfaction you gain from completing what you set out to do will give your brain a more effective reward, and focus will become as automatic as was responding immediately to distractions.

Thanks to Christy for editing and closing.



Sql Server vs Oracle and How DevOps Finally Chooses the Winner

TL;DR: Applying DevOps to database deployments tips the scales in Sql Server's favor with simpler sql scripts and DDL rollback.

The Bias of a Microsoft Developer...

I've flipped back and forth between Sql Server and Oracle my entire career, but as a Microsoft stack developer, I've tended to favor Sql Server.  Oracle, however, was the primary RDBMS at my current company for many years, so this Microsoft guy was forced to soften his bias.  As I got past the differences in Oracle, I eventually settled into the idea that it was nearly as good as Sql Server, just different.

.Net Likes Sql Server

The "nearly" stemmed from a few quirks that, while forgivable, still made my job harder.  If you're a .net shop, you'll forever be chasing some issue with the Oracle client.  In the days of the unmanaged version, you'll be fighting with oracle homes and mismatched clients.  You'll have to install publisher policy files so you don't have to upgrade your application every time you make a minor client upgrade.  In the days of the managed version you'll struggle with the bitness of the MTS dll.  You'll have collisions between your local config and the machine config that somebody accidentally applied with an oracle batch file.  Once you understand it, it's easy enough to work through, and maybe some of it is a decent exercise for learning the way .net works.  But you're always going to have new developers to whom this is all foreign, yet none of these cycles will be wasted with Sql Server, as the Sql Server assemblies are part of the .net framework.  Call it home team advantage.


But then there are the purely Oracle quirks, like the lack of a "read uncommitted" transaction isolation level, doing updates based on a join (the syntax sucks and doesn't work most of the time since Oracle decided to protect you with "key preserved" updates), or the lack of a date type bind variable.  It's important that these aren't labeled as a "second language" problem, where the loss is simply in translation from what I know to what I don't.  I don't see them that way – this is missing functionality that makes me take a longer route.  Nothing huge, but still things that slow me down.

How about two different programming contexts?  Sql Server just has T-Sql, where Oracle has both SQL Plus and PL-SQL.  If you do a query and nothing else, it's hard to tell them apart.  But let's say you need to do some preprocessing or execute a different query based on some condition.  In T-Sql it's no problem – it doesn't matter where the select happens – after an "if", inside of a proc, it all works exactly the same and the .net setup doesn't change.  But not in Oracle – logic is only available in PL-SQL (anonymous blocks or procs), yet you can't return a recordset from PL-SQL without a refcursor.  This makes debugging your sql code harder and puts you in the position of making changes that you wouldn't need to make for Sql Server.
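As a rough sketch of the difference (table, column, and procedure names here are hypothetical), compare the two approaches to a conditional query:

```sql
-- Sql Server: conditional logic and result sets mix freely in T-Sql
IF @includeInactive = 1
    SELECT * FROM Customer;
ELSE
    SELECT * FROM Customer WHERE IsActive = 1;

-- Oracle: the same branching must live in PL-SQL, which can only hand
-- rows back through a ref cursor
CREATE OR REPLACE PROCEDURE get_customers(
    p_include_inactive IN NUMBER,
    p_rows             OUT SYS_REFCURSOR)
AS
BEGIN
    IF p_include_inactive = 1 THEN
        OPEN p_rows FOR SELECT * FROM customer;
    ELSE
        OPEN p_rows FOR SELECT * FROM customer WHERE is_active = 1;
    END IF;
END;
```

The calling .net code changes accordingly – the T-Sql version is an ordinary command, while the Oracle version means binding a ref cursor output parameter.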


Once you get out of the programming space, you can make the other usual comparisons.  Sql Server seems to be cheaper, but I've also heard that nobody pays full price for Oracle, so maybe there is some wiggle room.

Sql Server also won in the tooling section.  Profiler is much simpler than TKProf, and Oracle traces can't see variable values.  The execution plan visualizer in Sql Server Management Studio was also, IMO, leagues ahead of both the Oracle web-based tools and those in Sql Developer.

Definitive Oracle Wins

Oracle does seem to "win" in two places.  One is clustering.  Oracle's RAC is true hot/hot load balancing for the same DB, which Sql Server does not have at this time.  I'm on the fence about the value of this, considering that with either product you need to size resources based on failure.  In other words, is there really a difference between a Sql Server running at 80% capacity and two RAC'd servers running at 40%?  Does it matter when the disk subsystem is the more likely bottleneck anyway?  Still, I'm sure it has its advantages in some cases.

The other Oracle victory is a mindset victory and not really a technological one.  Perhaps this is anecdotal, but Oracle installations always seem to be taken more seriously.  Rarely will you find Oracle without a DBA, but it seems to happen all the time with Sql Server.  The consequence is that Oracle is better maintained and managed than its Sql Server equivalent.  I honestly think this stems from Oracle actually being harder to admin.  But in the end, that widens the gap a poor server admin must cross before absorbing another system, and it makes him think twice about Oracle.

DevOps Redefines The Battle

So with all this, my previous stance was that while Sql Server was more friendly to development in a number of ways, it wasn't enough to overcome the current advantages of our solid Oracle installation vs the hodgepodge of Sql Servers we had.

But that changed this week.

We are on a big push to create an agile environment and with that, a DevOps mindset in our post-coding procedures.  Our builds are becoming single-click PowerShell scripts and our front-end deployments are being done to a purpose-built set of environments via Octopus Deploy.  And this week I started tackling the database side of that.

DevOps Requirements For Databases

Cumulative Deployments

I started with some criteria for database deployments.  First, they should be cumulative, at least up to a major revision.  The development environment might get 50 deployments, QA might get 5, and production might get one.  While it might seem easier to simply roll everything forward, what about the case where a development deployment accidentally drops a column?  Do you make a second deployment to try to repair that, if it's even possible?  Is it worth the risk to run such a thing in production?  I think it's safer just to remove it from the next "good" version of the deployment.

Repetitive Application

So with that, the idea of idempotence - repetitive application without change beyond the initial deployment - is also a requirement.  In order to apply roughly the same patch over and over to the same database, the installation process must have some intelligence and/or be written with this in mind.  Sometimes it means writing a merge statement rather than an insert statement.  But other times we need to only conditionally apply the update.
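For example, a data patch can be written as a merge so that re-running the script changes nothing after the first application (the table and values here are hypothetical, in Oracle syntax):

```sql
-- Idempotent data patch: the row is inserted only if it isn't already
-- there, so the same script can be applied any number of times.
MERGE INTO app_setting tgt
USING (SELECT 'feature_x' AS name, 'on' AS value FROM dual) src
   ON (tgt.name = src.name)
 WHEN NOT MATCHED THEN
    INSERT (name, value) VALUES (src.name, src.value);
```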

Oracle Falls Behind

SQL Plus Doesn't Do Logic

In T-SQL, this is relatively simple.  A simple “if column exists” check is all you need before your “alter table add column” statement.  But the SQLPlus/PLSQL conundrum complicates this problem.  There is no “if” statement in SQLPlus – which is the context in which DDL is executed.  The only way to do this is to convert the statement into a string and execute it with “execute immediate” inside of a PLSQL anonymous block or procedure.  This isn’t insurmountable, but we now have to escape single quotes, and we’ve lost syntax highlighting and error checking.
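As a sketch (object names are hypothetical), here are the two versions of the same conditional column add:

```sql
-- Sql Server: a plain conditional guard around ordinary DDL
IF COL_LENGTH('dbo.Customer', 'MiddleName') IS NULL
    ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;

-- Oracle: the guard needs a PL-SQL block, and the DDL itself must be
-- pushed into a string for EXECUTE IMMEDIATE
DECLARE
    v_count NUMBER;
BEGIN
    SELECT COUNT(*) INTO v_count
      FROM user_tab_columns
     WHERE table_name = 'CUSTOMER'
       AND column_name = 'MIDDLE_NAME';
    IF v_count = 0 THEN
        EXECUTE IMMEDIATE 'ALTER TABLE customer ADD (middle_name VARCHAR2(50))';
    END IF;
END;
```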

A Word on DB Migration Frameworks

I considered a database migration framework to help here.  I gave Flyway an honest attempt but I ran into too many issues: It only ran the first statement in a whole sql script and it didn’t support Oracle specific statements (it does now, but only in the paid versions).  I still plan on trying Liquibase, but in both cases the bar for ROI is high; this is not THAT big of a problem in my organization – database developers have little problem putting together a script that a DBA generally doesn’t mind executing.  It’s only the occasional problem we are trying to avoid and only a few hours a week that we are trying to save.  If I introduce something with a bit of a learning curve but with marginal improvement or its own set of problems, it’s not going to deliver any value to the business.

No Transactions - The Final Straw

So it is here that I just created a simple checkpoint table to track executed changes, and a simple PowerShell function to check and set that checkpoint.  It worked well enough, but testing illuminated the largest strike against Oracle. 
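The checkpoint table itself can be as minimal as this (a sketch; the names are hypothetical, not the exact table I used):

```sql
-- Each successfully applied change is recorded here; the deployment
-- script skips any change id it finds already logged.
CREATE TABLE schema_change_log (
    change_id  VARCHAR2(100) PRIMARY KEY,
    applied_at DATE DEFAULT SYSDATE NOT NULL
);
```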

Oracle, unlike Sql Server, does not support DDL within a transaction.  If an active transaction is in progress when DDL is encountered, it is automatically committed.  There is no way to group a bunch of DDL changes together.  To get around this, I was dropping and recreating the schema in order to run new versions of the script after I fixed bugs.  But dropping an entire schema is not really an option in many of our legacy systems.  They have too much data to restore each time.  But even if that weren’t the case in dev or QA, what about production?  Isn’t it a huge advantage to wrap something in a transaction and know everything will be as it was if it fails, rather than running for a restore?
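This is exactly what Sql Server allows and Oracle doesn't (a sketch with hypothetical object names):

```sql
-- Sql Server: DDL participates in the transaction, so a failed
-- migration leaves the schema exactly as it was.
BEGIN TRANSACTION;
BEGIN TRY
    ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;
    ALTER TABLE dbo.Customer ADD Suffix nvarchar(10) NULL;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;  -- both ALTERs are undone together
    THROW;
END CATCH;
```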

Liquibase and Flyway both have options to write undo scripts.  But frankly, I can’t see how this helps.  Undo for individual migrations still might leave you with a half applied major revision since it’s only going to undo the last script that failed, and I fail to see how you can structure a major update into a single file (at least in Flyway’s case).  Even if that’s not the case, we’re right back to the SQLPlus/PLSQL logic problem above as your undo is certainly going to need to investigate what it actually needs to undo.

Oracle has a potential fix in the form of “Flashback”.  But at this time you can only do this at the database level and not at the schema level.  We have hundreds of schemas in two Oracle databases – so data loss could be a consequence of this type of restore.


So while once I could have turned a blind eye in favor of slightly better upkeep, DevOps has brought me to a point where I’m reconsidering my stance in this battle.  There is too much running on Oracle here to consider replacement, but I wonder if we should consider growing our Sql Server environment and begin giving it preferential treatment.

I still owe Oracle the chance for some workarounds.  Maybe we can make schema-level backup and restore something that is easy to do from within the deployment, but again, this isn’t a translation problem.  Oracle is definitely missing the ability to run DDL in a transaction, and there is no simple solution to work around the issues it causes.

The "Platform" and its Mistaken Identity

When you write software for a business that isn't a software company, you'll inevitably get into conversations about what software is or isn't, should or shouldn't be.

Good software developers in the line-of-business space can take the rules and wants of a business and turn them into consistent rules and organized containers.  Still, a software program is far from "tangible".  Yes, the software exists, but looking at the ideas in that software as "things" is a tough ask.  Sometimes it's easy because we find real-world equivalents like "ShoppingCart" or "Order".  But things like "WorkflowStep" or "SecurityProvider" are harder to picture, and unfortunately the failure to do so allows a particular piece of software to be over-generalized.

Generalizations happen a couple of different ways.  Sometimes it's just mistaken identity.  "E-Store" seems simple enough, but once you ask questions like: is it B2B or B2C, is the product customizable, can you schedule shipments, is it an auction, can you delegate ordering, are cart items considered reserved, is your product regulated – the answers likely drive your final solution well beyond "allowing a user to buy my products on-line."

Sometimes the generalization comes from choosing the "what".  Is a recall something you do within a larger system of cars?  Or is "car" just a type in a system designed around recalls?

It is here that developers start to wish words like "leverage" didn't exist.  When sales can find a generalization that can fit both an existing solution and one they want to sell, all of a sudden they are the same thing.  It is here a "platform" idea arises.

For me a platform is about building blocks.  The .net "platform" is a bunch of blocks of various shapes but relatively small sizes.  You can connect them together to make almost any shape, but because they are smaller, it takes a while to make a larger shape.  That also means it's really flexible, well supported, and can easily be changed.  There are also other companies that build their own custom pieces (components) or chunks (open source or code snippets).  Other, more specific platforms, like CRMs, ERPs, etc., come with massive building blocks.  They drop in large chunks and the shape grows quickly – but the variability between pieces is much smaller than in something more general like a programming platform – so adding to your creation requires some conformity, or you must learn some pretty specific rules on how to create your own pieces.

But sometimes what a "platform" seems to have become is a singular piece of software that can be configured to meet conceptually similar needs.  While a stretch, this isn't that far from the truth.  But if you try to explain that you "configured" the .net platform by combining different assemblies and compiling them with Visual Studio, you're likely going to get some strange looks.  This just isn't what sales had in mind.  They meant a "platform" that is specific enough to have a particular function without being tied to a specific business case.

I'm not saying this isn't possible.  When you look at some CRMs or financial tools, they do allow a lot of business-specific customization without starting from a blank solution in Visual Studio.  But to me, there is no such argument as "build vs buy".  There are only shades of grey in terms of how much you can buy and how much you can build.  Do you buy an e-store and then customize it, or do you back up a little and buy a shopping cart component and pay for a payment processor?  Each step up gets you more canned functionality, but it also gets you farther away from resources and support.  You can find a developer, component options, and articles on how to use the smaller pieces in minutes.  But finding somebody with experience with Yahoo's store designer is a whole different story.

And it's not really a matter of cost – I believe the build vs buy lines eventually cross.  Build might require more investment upfront, but customization becomes easier with each component that you own.  Need to add a new concept to your data?  No problem – add another table to your database, add some relationships, and refactor.  Buy, on the other hand, can often get you a lot of functionality upfront, but there is no option to refactor something that you don't own.  Any customizations have to be done within extensions that have been offered based on the 80/20 rule – if 80% of their customers also want it, then it might be well supported.  But anything else and you're doing a work-around that is made complicated by the simple fact that it IS a work-around with somebody else's rules and isn't a direct solution.

I've been looking for an analogy that explains this for years, and through a link my boss sent me, I might have found one.  Consider the vehicle "platform".

Taking a vehicle platform to a drivable car is “custom” work – shaping the vehicle, organizing and styling the interior, etc.  But it is here I think businesses make the mistake… they boil the “needs” down too far.  The need of a car for each individual person goes WAY beyond driving from point A to point B.  Do you have kids, so you want sliding doors so they don’t smash them into things, or a split middle row so they can get in the back seat on their own?  Is the driver a bigger guy, so bucket front seats are out, or do you need AWD/4WD for winter driving? 

These are COMPONENTS used to build AND shape the final solution – many of them actually dictate what the vehicle is in the end (I’ve not yet seen a sedan with a sliding door option :)).  Even things like climate control and lighting seem ubiquitous, but they require ducts and electronics specific to that vehicle.  It is rare for a company to sell a premium version as a base model, only to unlock those premium features with a simple configuration.  You can’t get under a 2WD vehicle and see a 4WD transfer case that is just “disabled”.  “Here sir, let me enable SS mode so you can see what another $10k feels like” :).  Even if we could do this without making the customer feel duped, it still is not free.  Think about how much more a 4WD recall would cost if you had to apply it to 2WD cars as well.  In the software world, it’s the equivalent of having to consider unused functionality for fixes.

To see how this still holds up, take a possible counter-argument: GM is trying to reduce the number of platforms from 26 to four – but even they are giving themselves 8 years to do that, and it took them 100 years to get this far :).  And think about why they might be able to reduce the number of “platforms”.  I don’t think it’s because they can move more features into the platform and turn them on and off depending on the application.  I think it’s because development practices have advanced and components are cheaper, to where more of the “custom” vehicle can take up the slack that was once delegated to 26 different variations – a case of innovating the factory over the product.  The only other option is to drastically reduce their offering, which they can do… but in that case I might go buy a Ford instead.

I'm not saying a platform can't exist.  If you do something long enough you start to see some really good abstractions and can eliminate some boilerplate code that's found in every solution.  But abstractions are not software - they are not concrete.  They are the BASE for something more complicated.  And so when someone asks you if a previous solution can be "leveraged" you can say yes - the EXPERIENCE can absolutely be leveraged.  But that isn't the same thing as saying, "it's already done".

Paying Your Competitor's Training Costs - Following Instead of Leading

Have you ever advocated for something new or different only to have the door slammed with something like “that’s not industry standard” or “what we already do is best practice”?  What does this even mean?  If standards and practices cannot be challenged, how do they ever change?

I have a quote hanging on my wall:

Do not seek to follow in the footsteps of the wise. Seek what they sought.

Matsuo Basho

To me that means don’t just follow, but seek the reasons they went in that direction in the first place. This might seem simple but the idea has big implications in business.

Every business is different. Different sizes. Different goals. Different customers. Some businesses are innovators and at the forefront of their industry. And some are just trying to catch up. Maybe for that latter group, following what others do is a way to catch up; a way to pull a little closer to their competition. But at some point doesn’t every business want to lead? How can you lead if you’re doing what everybody else is doing?

Of course the other problem is whether you’re making an apples-to-apples comparison to begin with. “Software” means a lot of different things, but I think you can generalize with the following attributes:

  • Lifespan of the software
  • Size of the team developing the software
  • Size of the software’s user base
  • Distribution makeup of the software

Taking stock of these things matters because they drastically affect the VALUE of your development methodology. The value of things like unit testing or automated builds is MUCH higher when you consider the cost of a defect in something like SQL Server. Compare that to a home-grown project tracking system with 10 users, where a fix means updating a single file on a single server within your organization. The value just isn’t the same.

Then there’s the question of whether something is “standard” to begin with. There are multiple ways to skin a cat, and sometimes a company has to choose a way that makes sense in the most generic fashion. Take Microsoft’s layout of an MVC project in Visual Studio. There are three folders: controllers, models, and views. In a large project where the work might be horizontally separated over different developers, that might make some sense. But frankly, grouping these things first by project domain and THEN by the logical architecture makes a lot more sense to me. If I’m developing a project vertically through all the layers, I’d much rather have the Customer views, view models, and controllers in one “Customer” folder to save me the disjointed experience of jumping around. Is this wrong? I don’t think so. If you’re so inclined, you can even assign namespaces to these classes as if they still had the old layout. The logical architecture of the compiled solution would be identical. The only thing I changed was the physical layout of my code files. This is NOT something that is “standard”.
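As a sketch (the project and type names here are hypothetical), a file that physically lives in a feature folder can still declare whatever namespace the “standard” layout would have given it, so the compiled architecture is unchanged:

```csharp
// File: Customer\CustomerController.cs - physically grouped by feature...
namespace MyStore.Controllers   // ...but logically still a "Controllers" type
{
    public class CustomerController
    {
        public string Index() { return "customer list"; }
    }
}

// A second type co-located in the same Customer folder on disk,
// yet compiled into the "Models" namespace
namespace MyStore.Models
{
    public class CustomerViewModel
    {
        public string Name { get; set; }
    }
}
```

The folder only changes where the file sits in Solution Explorer; the namespace is what the rest of the code (and the compiled assembly) actually sees.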

Now, am I advocating against “industry standards”? Of course not. Take on as many of them as you can. When something is industry standard it can also mean better resources and better support. But the key is to use those standards because they solve a problem that you actually have. When you don’t actually have those problems, your employees are solving problems that their NEXT employer needs solved. Do you want to pay for training for your competitor’s employees?  



Pin a Second IE Instance for Development

This is sorta dumb.  I wanted a second pin for IE - one for regular use, and one for something I'm developing.  This seemed simple: make a local file and just pin it.  Of course, it doesn't work - the link just gets pinned to the existing IE button.

So in desperation I just made this dummy page.  I could have made a pinned site from just about anything, but I kinda liked the idea of using a custom icon.

I can't be the only one that hates surfing thru a list of 20 reference links to find the browser instance that's displaying my current project right?
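For reference, the dummy page can be as simple as the sketch below (the file name, icon, and URL are placeholders - point them at your own dev site; the msapplication-* meta tags are what IE9+ reads when you drag the tab to the taskbar):

```html
<!DOCTYPE html>
<html>
<head>
  <title>MyProject - Dev</title>
  <!-- name shown on the pinned taskbar button -->
  <meta name="application-name" content="MyProject Dev" />
  <meta name="msapplication-tooltip" content="Development instance of MyProject" />
  <!-- the URL the pin opens - point it at your dev site -->
  <meta name="msapplication-starturl" content="http://localhost/MyProject/" />
  <!-- custom icon so the dev pin is easy to spot -->
  <link rel="shortcut icon" href="dev.ico" />
</head>
<body>Drag this tab to the taskbar to pin a dev-only IE instance.</body>
</html>
```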



Synchronizing Config Files with Post-Build Events

If you have a test project, you'll get tired of keeping your configs in sync.  Depending on how your project is set up, sometimes you can point some sections at a centralized config.  However, most of the time I prefer to just use a post-build event to copy the file from one project to another.

copy $(SolutionDir)MyWebProject\Web-ConnectionStrings.config $(ProjectDir)MyProject.Tests.ConnectionStrings.config
echo ^<!--copied via post-build event in $(ProjectName)--^> >>$(ProjectDir)MyProject.Tests.ConnectionStrings.config

The first line copies the file.  But I wanted myself or other developers to know that the file is being overwritten, so I added the second line which appends a message to the end of the file.

Matching symbols could not be found - Performance Profiler

I was trying to profile a dll by launching it in another process, but when the profiler report was done it didn't have any references to my code - as if the dll was never instrumented.  I finally looked at the output window, where I noticed something odd - the profiler was instrumenting the dll in obj\debug instead of bin\debug.

When using the profiler wizard, it prompts you to choose which projects you want instrumented.  In doing so, it looks like it decides to use the obj\debug output, which of course the external exe never sees.  The fix seemed to be to explicitly add the dll instead of adding the project:



Goodbye Nunit-agent.exe

Once I started using Visual Studio 2010, I could no longer attach to nunit to debug my tests.  Instead I found that I had to attach to nunit-agent.exe.  Up till now this was a non-issue.  However, some performance profiling methods require a start-up app, and nunit starting its own process was confusing them, so I was forced to find the reason for this new process.

It turns out it's because nunit by default is set to run under the 2.0 framework and it needs to run a separate process in order to test 4.0 assemblies.  The good news is that you can force nunit to run under 4.0 with this change to your nunit.exe.config:

<requiredRuntime version="v4.0.30319" />

You can verify the before-and-after effect by looking at Tools -> Test Assemblies in nunit.
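For context, that element belongs in the <startup> section of nunit.exe.config (in NUnit 2.x the runtime settings there ship commented out), so the relevant portion ends up looking roughly like this:

```xml
<configuration>
  <startup>
    <!-- force NUnit itself to load the 4.0 CLR so it does not need to
         spawn a separate nunit-agent.exe process for 4.0 test assemblies -->
    <requiredRuntime version="v4.0.30319" />
  </startup>
  <!-- rest of nunit.exe.config unchanged -->
</configuration>
```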

Querying IIS Logs with Excel and MS Query

I needed to see which services in a WCF project were taking an unfair share of time.  The most accessible method is to analyze the IIS logs.  However, I didn’t want to spend all day figuring out how to use some of the analytical features in Excel, so I opted to use SQL instead.

Converting the File

The logs are in a space-delimited text format by default, so you’ll need to convert them to an Excel file first.
Open Excel, then choose File > Open and browse to your log file.  You’ll need to change the file filter to “All Files”.

In this case the IIS Logs are space delimited…


For the most part the “general” data type is fine, but you may need to return here and change it if you run into a data type error in your query.


Save the file. Select No when prompted to save in the current format and select xlsx instead.


Querying the Excel File

Create a blank workbook.  On the Data tab, select From Other Sources > From Microsoft Query.


Choose Excel Files as the data source and then select the Excel file you just created.


You’ll be prompted to choose columns.  You can use the picker, or just click Cancel to go right to the designer (say Yes at the prompt to continue editing).


Add the table(s) (sheets) from your Excel file that you’d like to query.



From there you’ll be in the designer.  You may want to just double-click the ‘*’ in the column selector and click the “SQL” button to get started.  After that you can execute whatever SQL you like.  For example, to group by request and find the top-running pages:

SELECT t.`cs-uri-stem`, Count(*), Sum(t.`time-taken`) AS 'totaltime', Min(t.`time-taken`) AS
'mintime', Max(t.`time-taken`) AS 'maxtime', Avg(t.`time-taken`) AS 'avgtime'
FROM `C:\Temp\u_ex130513.xlsx`.`u_ex130513$` t
GROUP BY t.`cs-uri-stem`
ORDER BY Sum(t.`time-taken`) DESC


Click OK in your query and then close the query editor.  You’ll then pop back into Excel and be prompted for a starting cell to place your query data.


Done. If you know SQL better than you know the functions in Excel, this is a much easier way to analyze a spreadsheet.




Parallel.For - Multithreading Made Easy

I was using nunit to make sure a checkbits/signing algorithm was doing what it was supposed to do. Like everything in cryptography, it was slow. The following loop took more than 16 seconds:

      for (int i = 0; i < 10; i++)
      {
        var value = (UInt64)rng.Next((Int32)cipher.MaximumSequence);
        ulong? lastHackedValue = null;
        //for all the checkbit combinations...
        for (int j = 1; j < Math.Pow(2, cipher.CheckBits) - 1; j++)
        {
          //move the checkbits over to the correct spot
          var signature = (UInt64)j << cipher.SequenceBits;
          var hackedValue = value + signature;
          try
          {
            //...validation call elided in the original listing...
            //reaching this point without an exception means the forged
            //signature was accepted - allow at most one such "crack"
            if (lastHackedValue == null)
              lastHackedValue = hackedValue;
            else
              Assert.Fail("Too many successful cracks for value {0}.  Hacked Value 1:{1}, Hacked Value 2:{2}",
                i, lastHackedValue, hackedValue);
          }
          catch (ApplicationException ex)
          {
            Assert.That(ex.Message == "Invalid signature");
          }
        }
      }
I tried using the Parallel class – sending my cpu utilization from 12% to 100% – and got it down to 5 seconds:

        var parallelresult = Parallel.For(0, 10, i =>
        {
          //...same loop body as before...
        });

I noticed that assertion failures were not being displayed correctly in nunit.  It turned out the .For call was wrapping them in an AggregateException, so I wrapped the code like this:

        try
        {
          var parallelresult = Parallel.For(0, 10, i =>
          {
            //...same loop body as before...
          });
        }
        catch (AggregateException ex)
        {
          //unwrap the assertion so nunit reports it properly
          if (ex.InnerException is AssertionException)
            throw ex.InnerException;
          throw;
        }
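To see the wrapping behavior in isolation, here's a minimal stand-alone sketch (no nunit involved) showing that an exception thrown inside the loop body surfaces from Parallel.For as an AggregateException with the original exception preserved inside:

```csharp
using System;
using System.Threading.Tasks;

class AggregateDemo
{
    static void Main()
    {
        try
        {
            // one iteration throws; Parallel.For collects the exception
            // and rethrows it wrapped in an AggregateException
            Parallel.For(0, 10, i =>
            {
                if (i == 5)
                    throw new InvalidOperationException("boom");
            });
        }
        catch (AggregateException ex)
        {
            // the original exception is available as an inner exception
            Console.WriteLine(ex.InnerException.GetType().Name); // InvalidOperationException
            Console.WriteLine(ex.InnerException.Message);        // boom
        }
    }
}
```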