Brothers In Code

...a serious misallocation of .net resources

WCF Self-Hosted Red Herring - '...but there is no http base address'

I'd guess that most people who have dealt with WCF have probably run into the following error:

Service cannot be started. System.InvalidOperationException: The HttpGetEnabled property of ServiceMetadataBehavior is set to true and the HttpGetUrl property is a relative address, but there is no http base address.  Either supply an http base address or set HttpGetUrl to an absolute address.

The fix is spelled out pretty clearly in the error.  But today I ran into this error despite having a baseAddress set in my config.  Setting HttpGetEnabled to false allowed the service to start, but a service from another project was configured with it set to true and worked just fine.  After mucking with the config trying to get it to work, I started thinking outside the box and realized I should check whether the service was actually running properly when it did start.  To my surprise I could not connect to my simple "hello world" service, and a quick "netstat -an" showed that nothing was listening on the configured port.  That finally pushed me beyond the config, and I started looking at other areas.

Finally I took a look at my Windows service class (named WCFHostSvc) and noticed this line in my OnStart method:


serviceHost = new System.ServiceModel.ServiceHost(typeof(WCFHostSvc));

I realized I had made a dumb mistake and passed in the Windows service type rather than the WCF service type.  So the line should have been:


serviceHost = new System.ServiceModel.ServiceHost(typeof(MainWCFService));

Like I said, I made a dumb mistake.  Still, the error handling from the framework could have been a lot better here.  Rather than starting up as if nothing was wrong, why can't the ServiceHost constructor check the class and make sure it has at least one operation to call?  Or better yet, check the type against the contract configured for the service?
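In fact, a guard along these lines in OnStart would have failed fast with a clear message.  This is just a sketch (it assumes using System, System.Linq, and System.ServiceModel, plus the MainWCFService type from above):

      // Verify the type carries a [ServiceContract], either directly or on
      // one of its interfaces, before handing it to ServiceHost.
      Type serviceType = typeof(MainWCFService);
      bool hasContract =
        serviceType.GetCustomAttributes(typeof(ServiceContractAttribute), true).Any()
        || serviceType.GetInterfaces().Any(i =>
             i.GetCustomAttributes(typeof(ServiceContractAttribute), true).Any());
      if (!hasContract)
        throw new InvalidOperationException(
          serviceType.Name + " does not expose a service contract.");
      serviceHost = new System.ServiceModel.ServiceHost(serviceType);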

Migrate Settings and Preferences for SQL Developer

I'm at the end of a very long day of dealing with a lot of Oracle B.S.  I'll get into that in another post, but I hope to save some other poor sap at least some frustration.  I was lucky enough to get a nice new i7 at work this week and started migrating over my stuff.  Most of the time it's a PST here or a folder there.  But of course, not Sql Developer.  Sql Developer spreads its config over a myriad of xml files in C:\Users\<user>\AppData\Roaming\SQL Developer\systemX.X.X.X (or C:\Documents and Settings\<user>\Application Data\SQL Developer\systemX.X.X.X in XP).  I suppose you could search for all the xml files and drop them in their respective folders, but I'm not sure it's safe and I'm certain it's a total pain in the ass.  I tried it myself, starting with connections.xml, which was easy enough, but I quickly abandoned the idea when I realized that things like the folders for the connections were IN A SEPARATE CONFIG FILE!!!  Instead, do the following:

  • Delete your existing systemY.Y.Y.Y folder from C:\Users\<user>\AppData\Roaming\SQL Developer
  • Copy your old systemX.X.X.X into C:\Users\<user>\AppData\Roaming\SQL Developer
  • Start up Sql Developer

This time Sql Developer should ask you if you'd like to migrate your settings.

'Send to' Menu Tweaks for Windows 7

Some of the new User Account Control features drive me nuts.  For example, I can't stand it when you open a non-user file in Notepad, only to get an access-denied message when you save it.  Yes, I know I could start Notepad as administrator and browse to the file again, but that kills me every single time.  Years ago I used to add Notepad to my 'Send to' menu, and I decided it was time to resurrect that practice.

Adding Notepad to the 'Send to' Menu

  • Open Windows Explorer and type shell:sendto in the address bar.
  • Click the Windows/Start button and type notepad into the search box.
  • Right-click-drag Notepad to your 'Send to' folder and select "create shortcut here".
  • In the properties of the shortcut, click Advanced, and check the 'Run as administrator' box.

Now we've got a quick way to edit a file without browsing to a folder twice.  But this got me thinking: it would also be nice not to have to copy and paste the path whenever I need to work in a folder from a command prompt.

Adding Cmd.exe to the 'Send to' Menu and Automatically Changing the Path

From the notepad example above, it's pretty obvious that the path is sent as the first parameter to the shortcut command.  We can take advantage of that for a new command line shortcut.

  • Click the Windows/Start button and type cmd into the search box.
  • Right-click-drag cmd.exe to your 'Send to' folder and select "create shortcut here".
  • Right-click the shortcut and choose properties.
  • Go to the shortcut tab and change the target to C:\Windows\System32\cmd.exe /k cd
  • Click advanced, and check the 'Run as administrator' box (optional).

Removing the Drives

The last thing I noticed was that my new and improved 'Send to' menu was polluted with all of my network and removable drives, so I searched for a way to take them out and found the answer at winhelponline.com.  It boils down to the following (a scripted version is sketched after the list):

  • Navigate in regedit to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
  • Add a DWORD value named NoDrivesInSendToMenu and set it to 1
  • Log off and log back on
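If you'd rather script that, a short C# sketch using the value name from the article does the same thing (run it as the user whose menu you want to change):

      using Microsoft.Win32;

      class NoDrivesInSendTo
      {
        static void Main()
        {
          // Create (or open) the per-user Explorer policies key and add
          // the NoDrivesInSendToMenu DWORD described above.
          using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
            @"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"))
          {
            key.SetValue("NoDrivesInSendToMenu", 1, RegistryValueKind.DWord);
          }
        }
      }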

The result should save me a keystroke or two :)


Bad Stats and the Strange "Cache Buffers Chain Latches" Wait

This week I had a query go south in terms of performance.  I had just tuned the thing a couple of months ago and it was running inside of a couple of seconds (it took two shots at a table that had 2.5 million rows).  Now it was running as long as 3 minutes.  Because part of the query is not date-limited, it is going to deteriorate over time.  However, after confirming that the execution plan was roughly the same as before, I went to the DBA, who saw a weird wait on “cache buffers chain latches” for a very small, 700-row table.  That didn’t help me; it just sounded like some crazy internal Oracle-ism.

I spent some more time looking for additional index options or changes to the query itself.  The DBAs did suggest a compressed index, since two of the columns were fairly non-unique, and that did help slightly.  Another option was to limit the entire query to a date range.  After a while, though, I decided I was going down the wrong path.  It just didn’t seem reasonable that the query had gotten so bad so quickly, and I felt like I was getting ahead of myself.

Playing with different date ranges (part of the query IS date-limited), I noticed that querying only the most recent data was slow.  Earlier ranges ran as expected (in a second or two), and their execution plan had a different order to the joins.  Here I realized that new data is loaded in batches, and the stats were probably skewed because of it.  Sure enough, the query started running normally after the DBA updated the stats for the schema.

Of course, I could have covered updating stats in about two sentences, so this is not the end of the story.  But the DBA likes to argue almost as much as I do, so it got more interesting from here.  He agreed that the stats were a reasonable fix, but he didn’t understand the wait on the “chain latches.”  He didn’t think the query should be waiting on anything.  I argued that the change in join order was causing a 2.5M x 700 row loop (vs. maybe a 200K x 700 loop), so a wait on the smaller table makes sense considering it’s being scanned at a ratio of 700:1.  I figured if you checked the tool enough times, you had a 1-in-700 chance of seeing the same type of wait on the other table.

We agreed to disagree on the details, but I admittedly didn’t know what this mysterious “latch” thing was, so of course I googled it and found an article from “orainternals”.  Inside is a lot of Oracle DBA stuff that I probably don’t have access to, but toward the bottom the article says this:

From these two different techniques, we can find objects common to both steps 1 and 2 and those objects are probable candidates to focus on. We see that ORDER table is common to both techniques. From the plan above, ORDER table is accessed in a tight nested loops join. This will increase buffer access to ORDER table, in turn, resulting in higher latching activity.

SQL tuning: That was easy
From here onwards, solution is straightforward, we need to avoid tight nested loops join.

I looked at the old and new execution plans.  Sure enough, the joins went from nested loops to hash and merge joins…

Unlike a nested loops join, which runs through the small table for EVERY row in the large table, a hash join takes just one pass at the small table, during the “build phase” when the hash table is created.  Consequently the small table was getting hit 1 time vs. 2.5 million times.
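A back-of-the-napkin sketch in C# makes the difference obvious (nothing Oracle-specific here; the row counts are the ones from this post):

      // Nested loops join: the small table is probed once per large-table row.
      const long smallRows = 700;
      const long largeRows = 2500000;
      long nestedLoopTouches = largeRows * smallRows;  // 1,750,000,000 touches

      // Hash join: one pass over the small table (build phase), then one
      // pass over the large table (probe phase).
      long hashJoinTouches = smallRows + largeRows;    // 2,500,700 touches

      Console.WriteLine("nested loops: {0:N0}", nestedLoopTouches);
      Console.WriteLine("hash join:    {0:N0}", hashJoinTouches);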

I’m not quite to the point yet where I can say why the optimizer chose the different join types, other than generally blaming it on the stats.  However, it was one of the very few times I was hit with the mechanics of an execution plan beyond index manipulation.  Before this, I wouldn’t have caught the join type as a pointer to the problem.

Oracle's Lack of Read Uncommitted - My Complaint and Getting Around It

The Complaint

Tom Kyte, THE resource for just about everything Oracle, said this about Read Uncommitted:

READ UNCOMMITTED . The READ UNCOMMITTED isolation level allows dirty reads. Oracle Database doesn't use dirty reads, nor does it even allow them. The basic goal of a READ UNCOMMITTED isolation level is to provide a standards-based definition that allows for nonblocking reads. 

I'm going out on a limb here in disagreeing with someone who is clearly an expert in his field, but I decidedly disagree with Tom.  The READ UNCOMMITTED isolation level allows dirty reads - so the basic goal of read uncommitted is to allow dirty reads, period.  Saying its goal is to allow "nonblocking" reads covers only the concept of lock acquisition.  It ignores the ability to see uncommitted data, regardless of what locking or versioning method is being used by the transaction that created the uncommitted data.  In other words, transaction 'A' can execute under a serializable isolation level and take all the locks it wants, but another transaction under read uncommitted should still see the uncommitted rows.

So what's my complaint?  In the days of both Sql Server and Oracle having row-versioning-based concurrency, there's almost no reason to allow dirty reads in a production system.  However, Oracle's blatant disregard for the ANSI standard here removes an invaluable tool for the developer.

I've always been a fan of the transaction management components in the Microsoft development stack.  From MTS, to COM+, to System.Transactions, it's always been very easy to maintain consistency over complex business operations.  These days I even wrap my unit tests in transactions so I don't need cleanup routines to purge data after a test run.  But therein lies the problem: when debugging a set of complex database operations, the transaction, by default, keeps me from seeing the current state of the database from an outside connection.
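The test pattern looks roughly like this (a sketch; TransactionScope lives in System.Transactions, and the 10-minute timeout is an arbitrary choice):

      // Because Complete() is never called, everything rolls back when the
      // scope is disposed, so no cleanup routine is needed.
      using (var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { Timeout = TimeSpan.FromMinutes(10) }))
      {
        // ... open connections and run the inserts/updates under test ...
      }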

For example, consider the following simplified commands, all inside a single transaction:

OracleCommand cmd1 = new OracleCommand("insert into batch returning batch_id into :batchId");
OracleCommand cmd2 = new OracleCommand("update table2 set batch=:batchId");
OracleCommand cmd3 = new OracleCommand("select * from table2 where batch=:batchId and Quantity!=OldQuantity");

In my case, cmd3 wasn't returning any rows.  What was wrong?  How do I find out?

In SQL Server, all I need is a simple "set transaction isolation level read uncommitted" in my query window.  From there I could put a breakpoint at cmd3 and execute the same select statement in the query window.  Playing around with some variations, I would have discovered that OldQuantity was null and that I needed to revise my query slightly.  However, there is no equivalent in Oracle.  There's no way to see the modified data outside of the transaction.

The Work-Around

But what you CAN do is look at the data from within the transaction.  Here's how.

Create a static method that converts a dataset into a string.  Any string format is fine, but I chose an HTML method from YordanGeorgiev.  You then need another method that takes a command object, executes it, and sends the resulting dataset to your dataset-to-string method.  Mine looks like this:


      public static string DumpQuery(OracleCommand cmd)
      {
        // Execute the command on its current (in-transaction) connection
        // and render the results as an HTML string.
        OracleDataAdapter da = new OracleDataAdapter(cmd);
        DataSet ds = new DataSet();
        da.Fill(ds);
        return DumpHtmlDs(null, ds); // the HTML formatter mentioned above
      }

Finally, if your sql statement is more than a one-liner, it'll be a mess to modify in the debugger.  Instead, I created a "query.txt" file in c:\temp.

Now put a breakpoint at your equivalent of "cmd3" and run to that piece of code.

From here you can run whatever statement you want.

In my case, I first replaced the CommandText with a new command by reading in my query.txt file with the File.ReadAllText method, along these lines:

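      // From the Immediate or Watch window at the breakpoint; cmd3 is the
      // command object from the simplified example above.
      cmd3.CommandText = System.IO.File.ReadAllText(@"c:\temp\query.txt");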

Then you can execute the new command and convert it to text, roughly like so:

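      // Again from the Watch window; DumpQuery is the helper defined above.
      string html = DumpQuery(cmd3);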

In my case, choosing HTML as the return format is nice and convenient, since I can inspect the result with the HTML Visualizer in the watch window.


You can continue to update your query.txt file and click the refresh buttons in the watch window to repeat this process and view all the dirty data you'd like.  You may want to raise the timeout on your outer test transaction to something that gives you plenty of time to play.

This isn't as efficient as using a good query tool, but it's better than nothing.  As for Oracle, a company that prides itself on all of its knobs and switches giving you exactly what you need, I'm disappointed in the lack of support for dirty reads and hope they consider some sort of feature in the future.


Grouping TreeView Items in WPF

I had a view model that consisted of a linked list of nodes.  I wanted to group one set of nodes, so I decided to try out the grouping functionality of CollectionViewSource.

Here's my XAML:


    <HierarchicalDataTemplate x:Key="ActionNodeTemplate">
      <tool:ActionNavigatorActionNodeUC />
    </HierarchicalDataTemplate>   
    <HierarchicalDataTemplate x:Key="GroupTemplate" ItemsSource="{Binding Items}" ItemTemplate="{StaticResource ActionNodeTemplate}">
      <Label Foreground="Green" Content="{Binding Path=Name}" Name="NodeLabel"/>
    </HierarchicalDataTemplate>
    <HierarchicalDataTemplate DataType="{x:Type an:ActionProjectSummaryNode}" ItemsSource="{Binding SubItemsView.View.Groups}" ItemTemplate="{StaticResource GroupTemplate}">
      <Label Foreground="Red" Content="{Binding Path=DisplayText}" Name="NodeLabel"/>
    </HierarchicalDataTemplate>
    <HierarchicalDataTemplate DataType="{x:Type an:RootActionNavigatorNode}" ItemsSource="{Binding SubItems}">
      <Label Foreground="Blue" Content="{Binding Path=DisplayText}" Name="NodeLabel"/>
    </HierarchicalDataTemplate>

ActionProjectSummaryNode exposed a CollectionViewSource of ActionNode items named SubItemsView.  In that node I grouped by Data.ActionTypeDescription with:


       SubItemsView.GroupDescriptions.Clear();
       SubItemsView.GroupDescriptions.Add(new PropertyGroupDescription("Data.ActionTypeDescription"));
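For context, the SubItemsView property itself might be wired up like this (just a sketch; SubItems is assumed to be the node's observable collection of ActionNode children):

       public CollectionViewSource SubItemsView { get; private set; }

       public ActionProjectSummaryNode()
       {
         // Wrap the child collection so the XAML above can bind to
         // SubItemsView.View.Groups once the group descriptions are added.
         SubItemsView = new CollectionViewSource { Source = SubItems };
       }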

That was pretty much it.  For nodes of type ActionProjectSummaryNode, ItemsSource was set to SubItemsView.View.Groups.  Groups is a ReadOnlyObservableCollection of CollectionViewGroup objects (CollectionViewGroupInternal, actually).  Each one has a Name property that represents the group (in this case the Data.ActionTypeDescription value) and an Items property that contains the original items that were grouped (the ActionNodes).

WPF ToolBar Buttons Do Not Cause Loss Of Focus

Consider this little WPF Window:


    <DockPanel>
      <ToolBar DockPanel.Dock="Top" >
        <Button Content="Save" Name="BtnSave" Click="BtnSave_Click" />
      </ToolBar>
      <StackPanel>
        <TextBox Name="IptTextBox" Text="{Binding Content, ElementName=OptLabel}"  />
        <Label Name="OptLabel" Content="Hello" />
        <Button Content="Lose Focus" />
      </StackPanel>               
    </DockPanel>

If you type something into the text box and then click the "Lose Focus" button, the label is updated as expected (since the default UpdateSourceTrigger for a TextBox's Text binding is LostFocus).  However, if you click the Save button in the toolbar, the label is not updated.

According to MSDN, ToolBars, along with a couple of other controls, define their own "logical focus scope."  To fix the problem, I added the following code to my save button's click handler:


      // Find the element that has logical focus within this window.
      Control focusedElement = FocusManager.GetFocusedElement(this) as Control;
      if (focusedElement != null)
      {
        // Bounce focus off the next control so the pending LostFocus
        // binding update fires, then give focus back.
        focusedElement.MoveFocus(new TraversalRequest(FocusNavigationDirection.Next));
        focusedElement.Focus();
      }

PL/SQL: numeric or value error: character string buffer too small - My love of Oracle Error Messages

I recently ran into this error when trying to run a stored proc from a C# app.  Looking at all of the values I was passing in, as well as the declarations in the stored proc, nothing was beyond the size limit of the given variables.


OracleCommand cmd = new OracleCommand(@"Begin INSERT_CMDData(
    P_PROJECT_NO => :P_PROJECT_NO,
    P_JOB_NO => :P_JOB_NO,
    P_ACTIVITY_NO => :P_ACTIVITY_NO,
    P_CLOCK_TIME => Sysdate,
    P_TRX_TYPE => :P_TRX_TYPE,
    P_IP_ADDR => :P_IP_ADDR,
    P_RETURN_STRING => :P_RETURN_STRING
  ); End;".Replace(Environment.NewLine, " "));
cmd.CommandType = CommandType.Text;
#region Params
cmd.Parameters.Add("p_project_no", OracleDbType.Varchar2).Value = projectNumber;
cmd.Parameters.Add("p_job_no", OracleDbType.Varchar2).Value = jobNumber;
cmd.Parameters.Add("p_activity_no", OracleDbType.Varchar2).Value = activityNumber;
//cmd.Parameters.Add("p_clock_time", OracleDbType.Date).Value = clockTime;
cmd.Parameters.Add("p_trx_type", OracleDbType.Varchar2).Value = timeClockTransactionType(transactionType);
cmd.Parameters.Add("p_return_success", OracleDbType.Int32, ParameterDirection.Output);
cmd.Parameters.Add("p_return_string", OracleDbType.Varchar2, ParameterDirection.Output);
#endregion
using (cmd.Connection = new OracleConnection(AppConfig.General.ConnectionString))
{
  cmd.Connection.Open();
  cmd.ExecuteNonQuery();

  Response response = new Response();
  response.Success = (Int32)(OracleDecimal)cmd.Parameters["p_return_success"].Value;
  response.DisplayMessage = cmd.Parameters["p_return_string"].Value.ToString();

  return response.DisplayMessage;
}

The issue turned out to be the return parameters and not the inserted values.  I simply needed to explicitly set the size of the returned varchar.


cmd.Parameters.Add("p_return_string", OracleDbType.Varchar2, ParameterDirection.Output);

had to be changed to:

cmd.Parameters.Add("p_return_string", OracleDbType.Varchar2, 4000).Direction = ParameterDirection.Output;

Environment Variables and Visual Studio - "The command exited with code 255."

I recently picked up a solution that my brother had previously started, and I was confounded by an error received during the pre-build events of a particular project.  The script was simple enough:

@if $(SkipPlatformVerification) == true (
 echo "Warning Platform Verification Task is disabled"
) else (
 echo "PVT is Enabled"
)

He was checking an environment variable to indicate whether or not the platform verification task was enabled.

I started the build and was presented with the error:

The command "@if == true (
 echo "Warning Platform Verification Task is disabled"
) else (
 echo "PVT is Enabled"
)"
exited with code 255.

After a quick Google search I found that the very descriptive “exited with code 255” can mean a myriad of things; it wasn’t going to point me toward a solution, it just let me know there was a problem with the script.

The first cause was obvious: I did not have the [SkipPlatformVerification] environment variable, so the script was failing to read a value and throwing the error.  After adding the environment variable (and restarting Visual Studio) and building again, it did indeed succeed.  What I found interesting, though, was the output.  The statement appeared in the output as below.

@if true == true (
 echo "Warning Platform Verification Task is disabled"
) else (
 echo "PVT is Enabled"
)

"Warning Platform Verification Task is disabled"

Visual Studio apparently pre-evaluates the $(variable) syntax, much the way javascript handles variables: by the time the statement is evaluated at run time, the variable has already been replaced by its value.  So in the case of our statement above, when I didn’t have the environment variable created, the output looked like this.

@if  == true (…

… which is obviously a syntax error. 

However, the ‘%Variable%’ syntax is not pre-evaluated; looking at the build output shows the following.

@if %SkipPlatformVerification% == true (
 echo "Warning Platform Verification Task is disabled"
) else (
 echo "PVT is Enabled"
)

The simple solution to the problem is to treat everything as a string.  Now, when the environment variable is missing, the output looks like this and the build completes successfully.

@if '' == 'true' (
 echo "Warning Platform Verification Task is disabled"
) else (
 echo "PVT is Enabled"
)
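In source form, the fix is just a matter of quoting both sides of the comparison in the pre-build event (my reconstruction of the corrected script):

@if '$(SkipPlatformVerification)' == 'true' (
 echo "Warning Platform Verification Task is disabled"
) else (
 echo "PVT is Enabled"
)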

While this ultimately turned out to be a simple syntax problem, I wanted to make a note of the way Visual Studio handles the different variable syntaxes for future reference.

The "Could Not Load File Or Assembly" Error Strikes in a New Way

I got bit by this old error in a new way the other day so I thought I should make a quick checklist for myself since I made some mistakes in my troubleshooting process. 

First I would start by reading How the Runtime Locates Assemblies.  However, unless you are depending on one of the methods that change the normal process, like including additional "codebases" or publisher policy redirection, we can whittle this down to a few simple causes:

  • A referenced assembly is not in the application's path
  • A referenced assembly is not in the Global Assembly Cache
  • The referenced assembly is a different version than the installed assembly
  • The application is 32-bit and the dll is 64-bit

The Old Problems - Local Files and the GAC

Many if not most applications rely on a simple xcopy deployment, meaning that all of the application's files are bundled together in the same directory.  If you're simply copying your app folder from one place to another and it's not working on the new machine, then either you missed a dll or you were using a GAC'd assembly on your development machine without realizing it.  If you're using a third-party component that was installed with a setup program, then there's a good chance your referenced assembly is in the GAC.  In that case you need to decide whether you want to run the third-party install program as part of your deployment or figure out which dlls you need and copy them to the folder.

GAC or otherwise, your app isn't going to find anything if it references a different version than what you have installed.  If realigning the deployed version isn't possible, there are two options: either change "Specific Version" to false in the reference properties in Visual Studio, or use a policy redirection.  I actually recommend the latter, since big vendors will include policies in the GAC that point old references to newer versions.  On the flip side, changing Specific Version to false will let your app load any version, including an ancient one, which might give you all sorts of strange errors.

The New Problems - 32 Bit, 64 Bit and the Platform Target Configuration

It's the last cause that is the inspiration for this post, since it has now bitten me multiple times.  In my case I was referencing Oracle.DataAccess.dll from Oracle's data provider for .net.  ODP.net throws in some additional variables, like PATH and "ORACLE_HOME" environment-variable dependencies, that threw me off from correctly debugging the problem.  After double-checking that the dll was in the GAC, and then even copying the dll to the app folder in desperation, I was still getting the error in question.  The error also included the phrase "or one of its dependencies," which kept pulling me back toward a PATH problem.  In retrospect, I don't remember ever seeing this error caused by a dependent dll, so I'm not sure why I put so much stock in that route.

Finally I got my wits back and loaded a consistent savior, Sysinternals Process Monitor.  I don't know why I'm so slow to use this tool sometimes.  Maybe it's the 5 minutes you have to spend getting the filters right, but 9 times out of 10 it more or less tells me exactly what is wrong.  Sure enough, I saw the program trying to find my dll in GAC_32.  Why would it look for a 32-bit dll when we're on a 64-bit machine with the 64-bit Oracle provider... you dumb-ass, I thought to myself, knowing that I had seen this before.  Sure enough, the task manager showed the infamous *32 next to my process.  I went back to Visual Studio and looked at the build properties for my exe project - once again, Platform Target was set to "x86".  I set it to "Any CPU", recompiled, and the stupid world was right again.
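For next time, a two-line sanity check (just a sketch; Environment.Is64BitProcess requires .NET 4.0 or later) makes the process bitness obvious before any GAC spelunking:

      // Prints whether the current process and OS are 32- or 64-bit.
      Console.WriteLine("Process: {0}-bit, OS: {1}-bit",
        Environment.Is64BitProcess ? 64 : 32,
        Environment.Is64BitOperatingSystem ? 64 : 32);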