As I mentioned in my previous blog post, in order to try out BugLogHQ in my work environment, I knew I would have to update the code so it could run against Oracle database tables. So I started looking through the code to gauge how difficult a task that would be, and whether it was worth the trouble. Supporting another database would mean at least reviewing (if not updating) every block of code that performed a SQL operation: if there were a lot of such blocks, it could be a time-consuming task.
Fortunately for me, BugLogHQ was designed to abstract most of the SQL operations behind a DAO layer. Each database table used by BugLogHQ is mapped to a DAO CFC in the components/db folder of the BugLogHQ application, and those CFCs spell out the name and cfqueryparam data type for each table field and inherit getter- and setter-type functions from a base DAO class. When it comes time to persist data from instantiated DAO objects into the database, the DAOs are processed by the dbDataProvider.cfc in the components/lib/dao/ folder, and that CFC contains the SQL code for performing all of the CRUD operations.
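To give a flavor of that pattern, here is a simplified sketch of what a table-to-DAO mapping looks like. This is not the actual BugLogHQ source: the component path, the init() wrapper, and the column names are illustrative, though the addColumn() calls mirror the real entryDAO code shown later in this post.

```cfml
<!--- Sketch of a DAO CFC: each table column is registered with its name and
      cfqueryparam data type; CRUD behavior is inherited from the base DAO --->
<cfcomponent extends="bugLog.components.lib.dao.dao">
	<cffunction name="init" access="public" returntype="any">
		<!--- hypothetical columns, for illustration only --->
		<cfset addColumn("createdOn", "cf_sql_timestamp")>
		<cfset addColumn("message", "cf_sql_varchar")>
		<cfreturn this>
	</cffunction>
</cfcomponent>
```

Because the column metadata lives in one place per table, supporting a new database engine mostly means adjusting these registrations and the shared dbDataProvider, rather than hunting down queries scattered across the application.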
(I should point out that this is a somewhat simplified description of all of the "moving parts" involved in data persistence in BugLogHQ, but after following the code through the process of adding and removing records, I determined that I didn't need to make changes to other objects such as the DAO factory.)
The main issue I needed to address was that the current code for inserting a new record took advantage of the autoincrement feature in the other databases supported by BugLogHQ (MySQL, MS SQL, etc.). Oracle doesn't come with an autoincrement field option. Instead, Oracle has what are called sequences, which are essentially autoincrementing counters that exist separately from the database tables (meaning you could, if you wanted to, pull the next integer from a particular sequence into any table). So in the SQL install script I wrote to create the needed tables in Oracle (mirroring the install scripts for the other database engines present in the BugLogHQ install folder), I added lines to create sequences for each table, with each sequence name being the table name with "_seq" appended to the end of it. I then added code to the _insert() function in the dbDataProvider.cfc to use the sequences for the primary key value when inserting a record:
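The install-script additions followed this pattern. This is a sketch, not the actual BugLogHQ install script: the table name and columns here are made up for illustration, but the "_seq" naming convention matches what I described above.

```sql
-- Create the table with a plain NUMBER primary key (no autoincrement)...
CREATE TABLE bugLog_ENTRY (
    EntryID NUMBER(10)     NOT NULL PRIMARY KEY,
    message VARCHAR2(4000)
);

-- ...plus a companion sequence named <tableName>_seq
CREATE SEQUENCE bugLog_ENTRY_seq START WITH 1 INCREMENT BY 1;

-- On insert, the sequence's NEXTVAL supplies the primary key value
INSERT INTO bugLog_ENTRY (EntryID, message)
VALUES (bugLog_ENTRY_seq.NEXTVAL, 'example entry');
```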
<cffunction name="_insert" access="private" returntype="any">
	<cfargument name="columns" required="true" type="struct">
	<cfargument name="_mapTableInfo" type="struct" required="true">
	...
	<cfset var dbtype = variables.oConfigBean.getDBType()>
	<cfset var lstFields = structKeyList(arguments.columns)>
	<cfset var tableName = arguments._mapTableInfo.tableName>
	<cfset var seqName = tableName & "_seq">
	<cfset var pkeyName = arguments._mapTableInfo.PKName>
	...
	<cfif dbtype eq "oracle">
		<cftransaction>
			<cfquery name="qry" datasource="#DSN#" username="#username#" password="#password#">
				INSERT INTO #getSafeTableName(tableName)# (#pkeyName#,#lstFields#)
				VALUES (
					#seqName#.nextVal,
					<cfloop list="#lstFields#" index="col">
						<cfqueryparam cfsqltype="#arguments.columns[col].cfsqltype#" value="#arguments.columns[col].value#" null="#arguments.columns[col].isNull#">
						<cfif i neq listLen(lstFields)>,</cfif>
						<cfset i = i + 1>
					</cfloop>
				)
			</cfquery>
			<cfquery name="qry" datasource="#DSN#" username="#username#" password="#password#" result="qryInfo">
				select #seqName#.currVal as lastId from dual
			</cfquery>
		</cftransaction>
		<cfset newID = qry.lastID>
	<cfelse>
	...
So if the dbtype value (which comes from the config/buglog-config.xml.cfm file) is "oracle", the Oracle insert code block will execute. The first cfquery block inserts the record, using the NEXTVAL pseudocolumn of the sequence associated with the table to get the next value of the sequence, while the second cfquery block retrieves the current/latest value of the sequence via CURRVAL and Oracle's DUAL "table" (a one-row utility table designed to let you select pretty much any value or function result that returns a single record).
The other issue I needed to address was adjusting some of the field data types. Oracle's "varchar2" data type can only hold 4,000 bytes of text; to store larger amounts of text, you need to use fields with the CLOB data type. And in order to use cfqueryparam with such fields, you need to set the cfsqltype parameter of cfqueryparam to "cf_sql_clob". So I updated the code in the entryDAO and extensionDAO CFCs to handle that (with the entryDAO.cfc code shown below):
<cfif variables.oDataProvider.getConfig().getDBType() EQ "oracle">
	<cfset addColumn("exceptionDetails", "cf_sql_clob")>
	<cfset addColumn("HTMLReport", "cf_sql_clob")>
<cfelse>
	<cfset addColumn("exceptionDetails", "cf_sql_varchar")>
	<cfset addColumn("HTMLReport", "cf_sql_varchar")>
</cfif>
After that, all that I had left to do was to add Oracle-specific conditional blocks to two other queries (ones that involved using database functions specific to each database platform) in two other files (components/hq/appService.cfc and components/lib/entryFinder.cfc), and I was done.
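Those conditional blocks followed the same dbtype-switch pattern shown earlier. As a purely hypothetical illustration (not the actual appService.cfc or entryFinder.cfc code), string concatenation is one operation that differs across the engines BugLogHQ supports:

```cfml
<!--- Hypothetical example: build a concatenation expression per database engine.
      MySQL uses CONCAT(), MS SQL uses +, Oracle uses the || operator. --->
<cfif dbtype eq "oracle">
	<cfset nameExpr = "firstName || ' ' || lastName">
<cfelseif dbtype eq "mysql">
	<cfset nameExpr = "CONCAT(firstName, ' ', lastName)">
<cfelse>
	<cfset nameExpr = "firstName + ' ' + lastName">
</cfif>
<cfquery name="qryPeople" datasource="#DSN#">
	SELECT #nameExpr# AS fullName FROM people
</cfquery>
```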
Had I done all of the changes I just described perfectly on the first try - which admittedly I didn't :) - it would have taken me less time than it did to write this blog post. And that's a mark of tight, well-designed application architecture, where certain key aspects of application behavior are centralized in a manner that lets you add functionality (in this case database engine support) by updating just a few files responsible for handling that particular task.