Brothers In Code

...a serious misallocation of .net resources

Migrate Settings and Preferences for SQL Developer

I'm at the end of a very long day of dealing with a lot of Oracle B.S.  I'll get into that in another post, but I hope to save some other poor sap at least some frustration.  I was lucky enough to get a nice new i7 at work this week and started migrating over my stuff.  Most of the time it's a .pst here or a folder there.  But of course, not Sql Developer.  Sql Developer spreads its config over a myriad of xml files in C:\Users\<user>\AppData\Roaming\SQL Developer\systemX.X.X.X (or C:\Documents and Settings\<user>\Application Data\SQL Developer\systemX.X.X.X on XP).  I suppose you could search for all the xml files and drop them in their respective folders, but I'm not sure it's safe and I'm certain it's a total pain in the ass.  I tried it myself, starting with connections.xml, which was easy enough, but I quickly abandoned the idea when I realized things like the folders for the connections are IN A SEPARATE CONFIG FILE!!!  Instead, do the following:

  • Delete your existing systemY.Y.Y.Y folder from C:\Users\<user>\AppData\Roaming\SQL Developer
  • Copy your old systemX.X.X.X into C:\Users\<user>\AppData\Roaming\SQL Developer
  • Start up Sql Developer

This time Sql Developer should ask you if you'd like to migrate your settings.

Bad Stats and the Strange "Cache Buffers Chain Latches" Wait

This week I had a query go south in terms of performance.  I just tuned the thing a couple of months ago and it was running inside of a couple of seconds (it took two shots at a table that had 2.5 million rows).  Now it was running as long as 3 minutes.  Because part of the query is not date limited, it is going to deteriorate over time.  However, after confirming that the execution plan was roughly the same as before, I went to the DBA, who saw a weird wait on “cache buffers chain latches” for a very small, 700-row table.  That didn’t help me; it just sounded like some crazy internal Oracle-ism.

I spent some more time looking for additional index options or changes to the specs for the query.  The DBAs did suggest a compressed index, since two columns were fairly non-unique, which did help slightly.  Another option was to limit the entire query to a date range.  However, after a while I thought I was going down the wrong path.  It just didn’t seem reasonable that it got so bad so quickly, so I felt like I was getting ahead of myself.

While playing with different date ranges (part of the query IS date limited), I noticed that querying only the most recent data was slow.  Earlier ranges ran as expected (in a second or two), and the execution plan had a different join order.  That's when I realized that new data is loaded in batches and the stats were probably skewed because of it.  Sure enough, the query started running normally after the DBA updated the stats for the schema.
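For the record, the fix itself is a one-liner from SQL*Plus.  The schema and table names below are stand-ins, not my real ones:

```sql
-- refresh optimizer stats for the whole schema...
exec dbms_stats.gather_schema_stats(ownname => 'MYAPP');

-- ...or just for the table that takes the batch loads
exec dbms_stats.gather_table_stats(ownname => 'MYAPP', tabname => 'SCAN');
```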

Of course, I could have talked about updating stats in about two sentences, so this is not the end of the story.  But the DBA likes to argue almost as much as I do, so it got more interesting from here.  The DBA agreed that the stats were a reasonable fix, but he didn’t understand the wait on the “chain latches.”  He didn’t think it should be waiting on anything.  I argued that the change in join order was causing a 2.5M × 700 row loop (vs. maybe a 200K × 700 loop), so a wait on the smaller table makes sense considering it’s being scanned at a ratio of 700:1.  I figured if you checked the tool enough you had a 1-in-700 chance of seeing the same type of wait on the other table.

We agreed to disagree on the details, but I admittedly didn’t know what this mysterious “latch” thing was, so of course I googled it and found an article from “orainternals”.  Inside is a lot of Oracle DBA stuff that I probably don’t have access to, but toward the bottom of the article it said this:

From these two different techniques, we can find objects common to both steps 1 and 2 and those objects are probable candidates to focus on. We see that ORDER table is common to both techniques. From the plan above, ORDER table is accessed in a tight nested loops join. This will increase buffer access to ORDER table, in turn, resulting in higher latching activity.

SQL tuning: That was easy
From here onwards, solution is straightforward, we need to avoid tight nested loops join.

I looked at the old and new execution plans.  Sure enough, the joins went from nested loops to hash and merge joins…

Unlike a nested loop join which runs thru the small table for EVERY row in the large table, a hash join just takes one pass at the small table during the “build phase” when the hash table is created.  Consequently the small table was getting hit 1 time vs 2.5 million times.
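If you want to see the difference for yourself, you can steer the optimizer with hints and compare the plans.  The table names here are hypothetical stand-ins for my real ones:

```sql
-- force the tight nested loops plan that hammers the small table per-row
select /*+ use_nl(small) */ big.*
from big_table big
join small_table small on small.id = big.small_id;

-- force a hash join so the small table is read once during the build phase
select /*+ use_hash(small) */ big.*
from big_table big
join small_table small on small.id = big.small_id;
```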

I’m not quite to the point yet where I can comment on why the optimizer chose the different join types, other than generally blaming it on the stats.  However, it was one of the very few times I was hit with the mechanics of an execution plan beyond index manipulation.  Before this, I wouldn’t have caught the join type as a pointer to the problem.

Oracle's Lack of Read Uncommitted - My Complaint and Getting Around It

The Complaint

Tom Kyte, THE resource of just about everything Oracle, said this about Read Uncommitted:

READ UNCOMMITTED. The READ UNCOMMITTED isolation level allows dirty reads. Oracle Database doesn't use dirty reads, nor does it even allow them. The basic goal of a READ UNCOMMITTED isolation level is to provide a standards-based definition that allows for nonblocking reads.

I'm going out on a limb here in disagreeing with someone who is clearly an expert in his field, but I decidedly disagree with Tom.  The Read Uncommitted isolation level allows dirty reads - so the basic goal of read uncommitted is to allow dirty reads, period.  Saying its goal is to allow "nonblocking" reads covers only the concept of lock acquisition.  It ignores the ability to see uncommitted data, regardless of what locking or versioning method is being used by the transaction that created the uncommitted data.  In other words, transaction 'A' can execute under a serializable isolation level and grab all the locks it wants, but another transaction under read uncommitted should still see the uncommitted rows.

So what's my complaint?  In the days of both Sql Server and Oracle having row versioning based concurrency, there's almost no reason to allow dirty reads in a production system.  However, Oracle's blatant disregard for the ANSI standard here removes an invaluable tool for the developer.

I've always been a fan of the transaction management components in the Microsoft development stack.  From MTS, to COM+, to System.Transactions, it's always been very easy to maintain consistency over complex business operations.  These days I even wrap my unit tests in transactions so I don't need to create cleanup routines to purge data after a test run.  But therein lies the problem.  When trying to debug a set of complex database operations, by default, the transaction keeps me from seeing the current state of the database from an outside connection.

For example, consider the following simplified example inside of a single transaction:

OracleCommand cmd1 = new OracleCommand("insert into batch returning batch_id into :batchId");

OracleCommand cmd2 = new OracleCommand("update table2 set batch=:batchId");

OracleCommand cmd3 = new OracleCommand("select * from table2 where batch=:batchId and Quantity!=OldQuantity");

In my case, cmd3 wasn't returning any rows.  What is wrong?  How do I find out?

In SQL Server, all I need is a simple "set transaction isolation level read uncommitted" in my query window.  From there I could put a breakpoint at cmd3 and execute the same select statement in my query window.  Playing around with some variations, I would have discovered that OldQuantity was null and that I needed to revise my query slightly.  However, there is no equivalent in Oracle.  There's no way to see the modified data outside of the transaction.
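For reference, this is all it takes on the SQL Server side (using the table from the simplified example above):

```sql
-- session-wide dirty reads in the query window
set transaction isolation level read uncommitted;
select * from table2;

-- or per-table with a hint
select * from table2 with (nolock);
```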

The Work-Around

But what you CAN do is look at the data from within the transaction.  Here's how.

Create a static method that converts a dataset into a string.  Any string format will do, but I chose an HTML method from YordanGeorgiev.  You then need another method that takes a command object, executes it, and sends the resulting dataset to your dataset-to-string method.  Mine looks like this:

      public static string DumpQuery(OracleCommand cmd)
      {
        OracleDataAdapter da = new OracleDataAdapter(cmd);
        DataSet ds = new DataSet();
        da.Fill(ds);
        return DumpHtmlDs(null, ds);
      }

Finally, if your sql statement is more than just a one-liner, it'll be a mess to try to modify in the debugger.  Instead, I created a "query.txt" file in c:\temp.

Now put a breakpoint at your equivalent "cmd3" and run to that piece of code.

From here you can run whatever statement you want.

In my case, I first replaced the CommandText with a new command by reading in my query.txt file with the File.ReadAllText method:
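The watch-window expression looked roughly like this (`cmd3` being the command stopped at the breakpoint):

```csharp
// re-point the command at whatever is currently in query.txt
cmd3.CommandText = System.IO.File.ReadAllText(@"c:\temp\query.txt");
```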


Then you can execute the new command and convert it to text:
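Again in the watch window, something along these lines (DumpQuery is the helper from above):

```csharp
// executes the command on the transaction's own connection
// and renders the results as an html string
string html = DumpQuery(cmd3);
```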


In my case, choosing html as the return format is nice and convenient since I can use the HTML Visualizer:


You can continue to update your query.txt file and click the refresh buttons in the watch window to repeat this process and view all the dirty data you'd like.  You may want to change the transaction timeout of your outer test transaction to something that gives you plenty of time to play. 
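On that last point, System.Transactions lets you pass the timeout right into the scope.  The 30 minutes below is an arbitrary number - pick whatever gives you enough time to play:

```csharp
using (TransactionScope scope = new TransactionScope(
    TransactionScopeOption.RequiresNew, TimeSpan.FromMinutes(30)))
{
    // ...run the commands under test/debug...
    scope.Complete();
}
```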

This isn't as efficient as using a good query tool, but it's better than nothing.  As for Oracle, a company that prides itself on all its knobs and switches to give you exactly what you need, I'm disappointed in their lack of support for dirty reads and hope they consider some sort of feature in the future.


PL/SQL: numeric or value error: character string buffer too small - My love of Oracle Error Messages

I recently ran into this error when trying to run a stored proc from a C# app.  Looking at all of the values that I was passing in, as well as the declarations in the stored proc, nothing was beyond the size limit of the given variables.

OracleCommand cmd = new OracleCommand(@"Begin INSERT_CMDData(
    P_PROJECT_NO => :p_project_no,
    P_JOB_NO => :p_job_no,
    P_ACTIVITY_NO => :p_activity_no,
    P_CLOCK_TIME => Sysdate,
    P_TRX_TYPE => :p_trx_type,
    P_RETURN_SUCCESS => :p_return_success,
    P_RETURN_STRING => :p_return_string); End;".Replace(Environment.NewLine, " "));
      cmd.CommandType = CommandType.Text;
      #region Params
      cmd.Parameters.Add("p_project_no", OracleDbType.Varchar2).Value = projectNumber;
      cmd.Parameters.Add("p_job_no", OracleDbType.Varchar2).Value = jobNumber;
      cmd.Parameters.Add("p_activity_no", OracleDbType.Varchar2).Value = activityNumber;
      //cmd.Parameters.Add("p_clock_time", OracleDbType.Date).Value = clockTime;
      cmd.Parameters.Add("p_trx_type", OracleDbType.Varchar2).Value = timeClockTransactionType(transactionType);
      cmd.Parameters.Add("p_return_success", OracleDbType.Int32, ParameterDirection.Output);
      cmd.Parameters.Add("p_return_string", OracleDbType.Varchar2, ParameterDirection.Output);
      #endregion
      using (cmd.Connection = new OracleConnection(AppConfig.General.ConnectionString))
      {
        cmd.Connection.Open();
        cmd.ExecuteNonQuery();

        Response response = new Response();
        response.Success = (Int32)(OracleDecimal)cmd.Parameters["p_return_success"].Value;
        response.DisplayMessage = cmd.Parameters["p_return_string"].Value.ToString();

        return response.DisplayMessage;
      }

The issue turned out to be the return variables and not the inserted values.  I simply needed to explicitly add the size to the returned varchar.

cmd.Parameters.Add("p_return_string", OracleDbType.Varchar2, ParameterDirection.Output);

Had to be changed to

cmd.Parameters.Add("p_return_string", OracleDbType.Varchar2, 4000).Direction = ParameterDirection.Output;

The "Could Not Load File Or Assembly" Error Strikes in a New Way

I got bit by this old error in a new way the other day, so I thought I should make a quick checklist for myself since I made some mistakes in my troubleshooting process.

First, I would start by reading How the Runtime Locates Assemblies.  However, unless you are depending on one of the methods that change the normal process, like including additional "codebases" or publisher policy redirection, we can whittle this down to a few simple causes:

  • A Referenced Assembly is not in the Application's Path
  • A Referenced Assembly is not in the Global Assembly Cache
  • The Referenced Assembly is a different version than the installed assembly.
  • The Application is 32 bit and the dll is 64 bit

The Old Problems - Local Files and the GAC

Many, if not most, applications rely on a simple xcopy deployment, meaning that all of the application's files will be bundled together in the same directory.  If you're simply copying your app folder from one place to another and it's not working on the new machine, then either you missed a dll or you were using a GAC'd assembly on your development machine and didn't realize it.  If you're using a third-party component that was installed with a setup program, then there's a good chance that your referenced assembly is in the GAC.  In that case you need to decide if you want to run the third-party install program as part of your deployment or figure out which dlls you need and copy them to the folder.

GAC or otherwise, your app isn't going to find anything if it references a different version than what you have installed.  There are two options if realigning the deployed version isn't possible.  Either change "Specific Version" to false in the reference properties in Visual Studio or do a policy redirection.  I actually recommend the latter since big companies will include policies for the GAC that point old references to newer versions.  On the flip side, changing Specific Version to false will let your app load any version, including an ancient one, which might give you all sorts of strange errors.
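A version redirect is just a few lines in the app.config.  The version numbers below are placeholders for whatever old and new versions you're dealing with, and while the publicKeyToken shown is the usual one for Oracle.DataAccess, verify it against your own GAC:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Oracle.DataAccess"
                          publicKeyToken="89b483f429c47342" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.112.3.0" newVersion="4.112.3.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```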

The New Problems - 32 Bit, 64 Bit and the Platform Target Configuration

It's the last cause that is the inspiration for this post since it has now bit me multiple times.  In my case I was referencing Oracle.DataAccess.dll from Oracle's data provider for .net, which throws in some additional variables like PATH and "ORACLE_HOME" environment-variable dependencies that threw me off correctly debugging the problem.  After double-checking that the dll was in the GAC, and then even copying the dll to the app folder in desperation, I was still getting the error in question.  The error also included the phrase "or one of its dependencies," which kept doubling me back to a PATH problem.  In retrospect I don't ever remember a case where this error was caused by a dependent dll, so I'm not sure why I put so much stock in that route.

Finally I got my wits back and loaded a consistent savior, Sysinternals Process Monitor.  I don't know why I'm so slow to use this tool sometimes.  Maybe it's the 5 minutes you have to spend getting the filters right, but 9 times out of 10 it more or less tells me exactly what is wrong.  Sure enough, I saw the program trying to find my dll in GAC_32.  Why is it looking for a 32-bit dll when we are on a 64-bit machine with the 64-bit Oracle client, dumb-ass, I thought to myself, knowing that I had seen this before.  Sure enough, the task manager showed the infamous *32 next to my process.  I went back to Visual Studio and looked at the build properties for my exe project - once again Platform Target was set to "x86".  I set it to "Any CPU", recompiled, and the stupid world was right again.

ORA-01036: Illegal Variable Name/Number Using ODP.NET

If you're reading this you've probably been to several sites already and are just about ready to kill something. Why Oracle couldn't print the name of the parameter that you attempted to bind is beyond me, but after all this is a company that was very recently recommending "locator variables." I'll be honest and say there is no magic bullet here. Compared to Sql Server's Profiler, Oracle's statement tracing functionality is not developer-friendly, and the ODP client-side tracing is pretty worthless. I can really only give some tips for things to look for and some brute-force debugging tactics.

To be clear, this is with Oracle's data provider (Oracle.DataAccess.Client) and not Microsoft's Oracle provider (System.Data.OracleClient). The Microsoft provider is being deprecated so it is not recommended that you follow some of the older suggestions of simply switching providers.

In short, this error is a catch all for some sort of mismatch between the parameters referenced in the command text, and the parameters added to the command object. To start there's a couple of simple things to look for:

  • Missing Comma - This one annoys the crap out of me. Forget a comma between constant values in an insert statement, and you'll get a nice "missing comma" error. However, forget a comma between bind variables and you'll get the error in question. My guess is that some genius decided to look only for an operator to terminate a bind variable and didn't consider whitespace.
  • The number of parameters match - This one is important and easy. We just ran into a problem where a misnamed command variable was causing an extra parameter to be added that we weren't seeing. A simple cmd.Parameters.Count in the watch window would have quickly told us there were more params in the collection than were in the statement.
  • The order of the parameters match - Despite the warm fuzzy you might get when your parameter names match what is in your statement, those names do nothing by default. The default functionality is to bind the parameters in the order they are added. You can change this by setting OracleCommand.BindByName = true;
  • Use OracleCommand.BindByName=true if you use the same parameter twice. This is a bit of an assumption since I haven't tested to confirm it, but because the default is to bind by order, I assume duplicated parameters would also need to be duplicated on the command object if BindByName is left false.
  • The type of the parameters match - The parameter type needs to match the type in the database. There is a little bit of flexibility with types like numerics (excluding overflow errors, OracleDbType.Int32 can be used in place of OracleDbType.Int64), but if you got lazy and defined a parameter as OracleDbType.Varchar2 when you've got a Date column, you might get this error. Generally I just use the Parameters.Add overload that takes a variable name and value and let Oracle decide on the type. That, however, does not work on output parameters - I set both the type and the length (at least for varchars) for those.
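To make the parameter names actually matter, flip BindByName on the command.  A quick sketch - the connection, table, and values here are assumptions for illustration:

```csharp
OracleCommand cmd = new OracleCommand(
    "select * from scan where scan_date between :startDate and :endDate", conn);
cmd.BindByName = true;
// with BindByName = true, the add order no longer has to match the statement
cmd.Parameters.Add("endDate", OracleDbType.Date).Value = endDate;
cmd.Parameters.Add("startDate", OracleDbType.Date).Value = startDate;
```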

If the above eye-ball debugging doesn't work, I suggest you start trimming down your statement and params until you get a successful execution and then work backwards from there. I know that's not much to go on, but if I find something else I'll be sure to post it.

"show errors" For Oracle Scripts

Everything seems to be turned off by default in Oracle.  That includes returning the details of an error in a sql script.

After finishing a piece of code like a stored proc or trigger, most developers immediately run the script to make sure there are no errors.  Unfortunately this is all you'll see with Oracle:

Warning: Trigger created with compilation errors. 

There is, fortunately, a way to show the details of those errors.  If you add a "/" to terminate the script and then "show errors;" at the end of your script, you'll see the following instead:

Warning: Trigger created with compilation errors.


LINE/COL ERROR
-------- -----------------------------------------------------------------
3/5      PL/SQL: SQL Statement ignored
3/12     PL/SQL: ORA-02289: sequence does not exist

Here's an example:

create or replace trigger PROJECT_BIU
before insert on project
referencing old as old new as new
for each row
begin
    --create the id
    select ProjectNuber_seq.nextval
    into :new.Project_Number
    from dual;
end;
/
show errors;

Oracle's Developer Tools - Not So Good (Data Modeler Review)

EDIT: 7/28/2014

I should mention that I'm still using Data Modeler, despite all the shortcomings below.  Its ability to reconcile changes against previous versions of the model, script, or live db, and generate a change script, is really great.  This could be a great tool, but there's still a lot to be desired.

My current employer primarily uses Oracle as their RDBMS, which I have no problem with.  While I can't say I agree that it's a "better" database than Sql Server, I definitely see its strengths and it is certainly a powerful piece of software.  But that unfortunately stops with the database engine.  Oracle's developer tools are on the abysmal side.  They are chock-full of bugs and usability problems.

Let's take Oracle's Data Modeler.  In typical Oracle fashion it's full of bells, knobs, and whistles.  In addition to the expected relational modeler it also contains a slew of tools aimed at the database geek.  The problem is they're so busy making this the ultimate do-everything tool that they ran out of time when it comes down to usability:

  • You can't scroll left, right, up, or down beyond the farthest object in that direction.  If you want to add something to the perimeter of your erd, you need to add something and push it over a little at a time, and even that only works to the right or down.  Moving an object left or up runs it into a brick wall.
  • Items in drop-down lists, like the pk/uk index selection for a constraint, are sorted in creation order.  Seriously, a database company failing to recognize a decent place for a simple sort?
  • At first I thought there was no such thing as automatic right-angle connectors.  But one day I right-clicked on an empty area of the diagram and selected "auto route" - I hadn't found any such feature in the preferences.  Worse, I quickly realized it's a worthless feature since it doesn't let you fix the connectors afterwards (and it does a pretty lousy job, so they do need to be fixed).
  • Doing things like adding columns requires way too many windows and steps:
    • Right click
    • Click Properties
    • Click Columns
    • Click the '+' button
    • Click the Type drop down and select
    • In visio, I just started typing the column name in the pane on the bottom.
  • The properties box is modeless, which means you can make changes outside of the properties box, but you'll need to close it to refresh.
  • The connectors are pretty dumb.  They stay stuck to the edge they were originally attached to even if that's the opposite edge.
  • Inconsistent context menus.  Some things have them, but try to add a table or domain by right-clicking?  Nope.  For tables you need to use the designer tool.  For domains you need to go to Tools > Domains Administration.
  • In Domains Administration, the Save button doesn't enable after changing something.  You need to move to something else and wait for the "Do you want to save..." dialog.  Only after that can you click save.
  • If an additional unique constraint includes the PK column, it is not shown for that column - it only shows "P".  Even worse, if the alternate key is composite, the other column looks like it's unique by itself.
  • Mouse wheel only scrolls up and down.  No zoom or horizontal scroll.
  • I just about had a seizure when I saw this color scheme:
  • Very Confusing button sets - On many dialogs there will be a Save, Apply, and Close button.
  • Custom design rules can only be deleted.  They cannot be deactivated.
  • Design Rules is a modal dialog.  You can't leave it open while fixing the errors that it lists.
  • Want to rename a table?
    • Table Properties
    • Rename the table
    • Click OK
    • Back into table properties
    • Click Naming Rules
    • Click OK to rename the related constraints
    • Click OK
    • And, no you can't do the rename and the naming rules in the same step - it doesn't rename the primary key constraint until you close out of the properties window.
    • By the way, despite not having an option listed for columns, the naming rules are also going to rename your columns (which was great when my Parent_Object_Id, Child_Object_Id columns were renamed Object_Id and Object_Id1).
  • No feedback - try deleting a column with a foreign key constraint.  Nothing will happen.  You have to delete the FK, click ok, then go back in a delete the column.
  • Save As always defaults to your My Documents directory.  So if you want to save a copy under a different name but in the same directory, you'd better not forget to change the path.
  • Multiple periods in the file name are ignored.  So if you save an erd as erd1.2.dmd it will be saved as erd1 instead.
  • As far as I can tell there is no way to change your subversion server url.  They give you a cute Edit "server" button that opens a text editor for other subversion parameters, but to change my server from a local repository to an online one, I had to rename repositories.xml under C:\Documents and Settings\brandon\Application Data\Oracle SQL Developer Data Modeler\system3.1.0.691\o.jdeveloper.subversion. and start again.
  • Can't associate a DMD file - I've tried opening the program by clicking on a dmd file.  However, associating the dmd file extension to data modeler causes the dmd file to be opened in sql developer instead as an xml file??
  • No indicator that current file has changes.  Asterisk, italics, anything would be nice to let me know that changes have been made but are not saved.
  • CTRL-S doesn't save.  Every other app on the planet, including Oracle's SQL Developer (so I know they got the memo), uses CTRL-S to save.  I thought I was saving, but of course, given the item immediately above this one, I didn't realize that I was doing CTRL-S just to stretch my index finger and not for any other purpose.

Selecting A Row With A Minimum Value

I think everybody has had to do this a dozen times: return a row of data having a minimum value.  The first reaction is to do a subquery in the where clause or as a derived table:

select *
from scan s
join (
  select min(scan_date) as scan_date
  from scan
) s_min on s.scan_date=s_min.scan_date
where rownum=1

A more performant solution is to use analytic functions:

select *
from (
  select s.*, row_number() over (order by s.scan_date) rn
  from scan s)
where rn = 1;

However, I thought this was a relatively simple and database agnostic solution:

select s1.*
from scan s1
left join scan s2 on s1.scan_date>s2.scan_date
where s2.scan_date is null
and rownum=1

This will return the row where no other row in the same table is older - in other words, the oldest row.  This is probably the worst performer of the bunch but might be useful in an already complicated query.

HTTP Requests With Oracle - Part 1

Making a web request from oracle is as easy as:

select utl_http.request('')
from dual;

But if you're making a request from Oracle, chances are you need to do a little work and not just google something, so here's a little pl/sql code...

  l_url varchar2(4000) := '';
  l_httpRequest   utl_http.req;
  l_httpResponse  utl_http.resp;
  l_tempResponse clob;
  l_buffer varchar2(32767);
  utl_http.set_detailed_excp_support (true);

  --setup the http connection
  l_httpRequest := UTL_HTTP.begin_request(l_url, 'GET','HTTP/1.1');

  l_httpResponse := utl_http.get_response(l_httpRequest);
  --write the response (in blocks) to a temporary clob
  dbms_lob.createtemporary(l_tempResponse, FALSE);
       UTL_HTTP.read_text(l_httpResponse, l_buffer, 32767);
       dbms_lob.writeappend(l_tempResponse, length(l_buffer), l_buffer);
    WHEN utl_http.end_of_body THEN NULL;

  DBMS_OUTPUT.PUT_LINE('Response: ' || l_tempResponse);


It would be a simple job to add a couple of query string parameters onto the URL.  But it's very likely you'll be sending data to some sort of web service and then expecting a response.
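For the query-string route, something like this would do - the url and parameters are made up for illustration, and utl_url.escape handles the encoding:

```sql
select utl_http.request('http://example.com/lookup?'
         || 'term=' || utl_url.escape('some search term', true)
         || '&max=10')
from dual;
```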

Here's an example of a POST request.

  l_httpRequest := UTL_HTTP.begin_request(l_serviceUrl, 'POST','HTTP/1.1');
  utl_http.set_header(l_httpRequest, 'Content-Type', 'application/x-www-form-urlencoded');
  --these are likely constant for the application
  l_postData :=
    'x_login='|| 'xxxxxx'
    || '&x_tran_key='|| 'yyyyyyy'
    || '&x_delim_data='|| 'TRUE';

   utl_http.set_header(l_httpRequest, 'Content-Length', length(l_postData));

  utl_http.write_text(l_httpRequest, l_postData);
  --get a reference to the response
  l_httpResponse := utl_http.get_response(l_httpRequest);

That's it.  In Part 2 I'll go over some of the finer points of SSL requests and the Oracle Wallet.