
Tuesday, March 12, 2013

Non-deplorable use of triggers

So I had a rare case today to actually use a trigger without invoking the wrath of all the DBAs, developers and gnomes that live under my desk.

For starters, let me give my little myopic view of why I (and most SQL developers I know) avoid triggers like The Plague. Up front I admit that I probably don't know all there is to know about triggers, or how best to implement them without pulling your hair out. That said, the trigger-based setups I've encountered tend to have one of the two following problems.

Triggers, when used to enforce data integrity, restrict data in at least one way. There's an older system I occasionally have to work on which has a procedure that updates identifiers in about 20 tables, each table containing triggers referencing other tables (some the same ones hit by the procedure, some not). When the procedure fails, tracking down where the error took place in the procedure is just the beginning. You then end up traversing a network of obscure DML-enforcing statements across dozens of objects. The end result is that most people who work on the system go to extra lengths to circumvent the triggers when something goes wrong rather than dig through them to fix the problem.

The next problem with triggers is that, regardless of the use, there is a "hidden" execution cost on every transaction against the table to which the trigger is bound. Imagine you have a table which had very low traffic when it was built; maybe 50 items added a day. Each time one of those rows is logged, a trigger fires to update an audit logging table, and additionally updates a reference table to make sure that any identifiers which came in are historically corrected as well. Now imagine a few years go by, the developer who wrote the system has retired to become a pet psychiatrist, and the company grows by leaps and bounds, so that table is now receiving 500,000 DML transactions a day, or 5 million. While there are certainly ways to remedy this situation, it might take a long time to realize that there is a trigger on the table at all.

So again, maybe this is just the way I've grown accustomed to doing things, but table constraints and procedural logic are how I prefer to maintain data integrity.

That said, here's the situation I had today. A client was trying to upload some new data to one of our test environments, and throughout the day the data would (according to the client) appear and later disappear. I'll spare you the hour-long heated conversation between several teams, but in the end I undertook the task of monitoring the tables which housed the data to see what happened to them throughout the day. Initially I set up an SSIS package, run periodically via an Agent job, which would dump the data into a logging table throughout the day. But on my bus ride home, the annoying fact persisted that this still would not necessarily catch the culprit statements. Thinking back on a presentation I'd given on logging tables (Change Tracking, Change Data Capture, etc.), it suddenly occurred to me that a trigger would be a perfect solution to this*.

I set up AFTER INSERT, UPDATE, DELETE triggers on the two tables with the identifiers I was concerned with and had them dump any DML activity into a logging table. The logging table keyed on an auto-incrementing identity column to prevent any PK violations. The triggers additionally filtered the INSERTED and DELETED tables by the 4 identifiers the client said were popping in and out, and I set up another Agent job to keep the logging table from growing larger than one month of history. Additionally, I wrapped the trigger creation in a clause so it would only be created in development and test environments. There's certainly no reason it could not run in production as well, but the idea here was maximum visibility into the tables' modifications with a minimal footprint on the database. So with a small, non-intrusive trigger, I was able to log the actions on the tables and identify when the records popped in and out of existence.
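
To give a rough idea of the shape of it (the table, column, and identifier names below are made up for illustration; the real objects are specific to our system, and our environment check wasn't a server-name test), the pattern was roughly this:

-- Hypothetical logging table; the identity PK is what prevents PK violations
IF OBJECT_ID(N'dbo.ClientData_DMLLog') IS NULL
CREATE TABLE dbo.ClientData_DMLLog
(
    LogID      int identity(1,1) primary key clustered,
    LogDate    datetime    not null default getdate(),
    DMLAction  varchar(20) not null,
    Identifier varchar(50) not null
)
GO

--Only create the trigger outside of production (a server-name check is just one way to sketch it)
IF @@SERVERNAME NOT LIKE '%PROD%'
EXEC('
CREATE TRIGGER dbo.trg_ClientData_AuditDML
ON dbo.ClientData
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    --Rows showing up (INSERTs and the new side of UPDATEs), filtered to the identifiers in question
    INSERT INTO dbo.ClientData_DMLLog (DMLAction, Identifier)
    SELECT ''INSERT/UPDATE'', i.Identifier
    FROM inserted i
    WHERE i.Identifier IN (''ID1'', ''ID2'', ''ID3'', ''ID4'');

    --Rows going away (DELETEs and the old side of UPDATEs)
    INSERT INTO dbo.ClientData_DMLLog (DMLAction, Identifier)
    SELECT ''DELETE/UPDATE'', d.Identifier
    FROM deleted d
    WHERE d.Identifier IN (''ID1'', ''ID2'', ''ID3'', ''ID4'');
END
')
GO

The dynamic SQL wrapper is only there because CREATE TRIGGER has to be the first statement in its batch, which makes conditional creation a little awkward otherwise.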

There are still a few drawbacks to this approach. First of all, maintaining the list of tracked tickers is a very manual process. While this is a rare situation that I'll probably only have to monitor for a few weeks, if it happened again I'd almost have to re-build the trigger from scratch. Second, ideally I would have the trigger "retire itself" after, say, a month, so that if I forgot about it when I moved on to become a pet psychiatrist, the trigger wouldn't get lost in the tangle of 0s and 1s. Also, and this is not really a drawback of a trigger but rather a limitation, I would have liked a way to pass the calling procedure's information into the trigger in order to further trace the source (something like object_name(@@PROCID)), but unfortunately the execution context of @@PROCID changes to that of the trigger upon its call.
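
For the "retire itself" part, one approach that occurred to me (sketched against the hypothetical trigger above, and not something I actually put in place) would be a simple date guard at the top of the trigger body, with the existing cleanup Agent job doing the eventual drop:

--Inside the trigger body: after a hypothetical cutoff date, stop doing any work
IF GETDATE() > '20130412'
    RETURN;

--In the cleanup Agent job: actually drop the trigger once it has expired
IF GETDATE() > '20130412' AND OBJECT_ID(N'dbo.trg_ClientData_AuditDML', N'TR') IS NOT NULL
    DROP TRIGGER dbo.trg_ClientData_AuditDML;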

In the end, however, this seemed like one of those times where it was the right tool for the job. It's refreshing to break out of the tunnel vision which often inadvertently affects one's programming style and find a legitimate use for something you'd historically written off.



* Change Tracking and CDC were not options due to database restrictions we have.

Monday, December 12, 2011

Replace tags in a string

I ran into a situation today where a client wants to be able to give us a string for a disclaimer, but with tags embedded in the string which, after we work our magic, will be replaced with values from a table, making the string somewhat dynamic in nature. Here's how I went about doing it.

First, I made a function to house the logic around the tags. I realize there are a few shortcomings here; currently it means you can't have a '<' in the text without it being the start of a full tag ending in '>', but there are a few adaptations I feel would be easy enough to implement if this became problematic. The code is a modified version of a comma-delimited string parsing function I use a lot. Basically, what this part does is get the CHARINDEX of the opening and closing brackets and the length between the two. It cuts out the substring between them based on that length and pops it into a table. After it does that, it cuts off everything to the left of that closing bracket (or, to be more accurate, it keeps the right side). This repeats until you have a table filled with the contents of all the bracketed terms in the string.



IF OBJECT_ID (N'dbo.TagParse') IS NOT NULL
    DROP FUNCTION dbo.TagParse
GO

CREATE FUNCTION dbo.TagParse(@String varchar(max))
RETURNS @TagValues TABLE
(
    ident int identity(1,1) primary key clustered,
    TagValue varchar(100)
)
AS
begin

    declare
        @FragText varchar(255),
        @OpenIndex int,
        @CloseIndex int,
        @FragLength int,
        @StringLength int

    --Get some relevant information about the fragment length and position
    select
        @OpenIndex = CHARINDEX('<', @String),
        @CloseIndex = CHARINDEX('>', @String),
        @StringLength = datalength(@String)
    select
        @FragLength = @CloseIndex - @OpenIndex

    while @CloseIndex > 0
    begin

        --Cut out the text between the brackets and keep everything right of the closing bracket
        select @FragText = SUBSTRING(@String, @OpenIndex + 1, @FragLength - 1)
        select @String = RIGHT(@String, @StringLength - @CloseIndex)

        insert into @TagValues(TagValue)
        select @FragText

        --Recalculate the positions for the next pass
        select
            @OpenIndex = CHARINDEX('<', @String),
            @CloseIndex = CHARINDEX('>', @String),
            @StringLength = datalength(@String)
        select
            @FragLength = @CloseIndex - @OpenIndex

    end

    return
end
GO
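
Just to sanity-check it, running the function against a throwaway string (my own example, not the client's actual disclaimer text) should return one row per bracketed term:

select ident, TagValue
from dbo.TagParse('Hello <FirstTag> world, hello <SecondTag> moon')

--Returns:
--ident  TagValue
--1      FirstTag
--2      SecondTag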


Once we've got the tags parsed out, we can move on to getting those values from a presentation table. For the purposes of what I had to do, they will always come from the same table, but I suppose if one were so inclined, you could do some funky dynamic SQL to be able to specify a table name too, but I won't go there (right now). I've built up some basic sample data so you can see this in action, but the idea behind it is to:

1) parse out the string using the above function
2) set a variable = the fact name parsed from the string
3) retrieve the value from the presentation table based on the fact name from step 2

Then, much to my dismay, using a loop, I go through the parsed table and replace instances of each tag with the values I just set. If there's a set-based way to do this, I'd love to know it, but I couldn't think of one that didn't involve dynamic SQL and lots of nested REPLACE statements. And given that these tagged values are probably not ever going to be more than a few tags here and there, the strain of RBAR is not too bad.


if OBJECT_ID('tempdb.dbo.#FactSet') is not null drop table #FactSet
create table #FactSet
(
    FactName varchar(100),
    FactValue varchar(100)
)

insert into #FactSet (FactName, FactValue)
select 'FactName1', 'Test Value 1' union all
select 'FactName2', '3.14'

declare @Tags table
(
    ident tinyint primary key clustered,
    TagValue varchar(100)
)

declare
    @String varchar(max),
    @ident tinyint,
    @MaxIdent tinyint,
    @FactName varchar(255),
    @FactValue varchar(255)

--This is the string that will constitute the disclaimer text they want to chop up
select @String = 'This is a test string. Field 1: <FactName1>, Field 2: <FactName2>'

--Parse the string into its constituent tags
insert into @Tags (ident, TagValue)
select ident, TagValue
from dbo.TagParse(@String)

--Get the max ident through @@rowcount so I don't have to actually hit the @Tags table
select
    @Ident = 1,
    @MaxIdent = @@rowcount

while @ident <= @MaxIdent
begin
    --Get the fact name we parsed and assign it to @FactName
    select @FactName = TagValue
    from @Tags
    where ident = @ident

    --Get the value from the Fact Set table which corresponds to @FactName
    select @FactValue = FactValue
    from #FactSet
    where FactName = @FactName

    --We then replace any instances of the original tag with the value we just got,
    --enclosing the @FactName in <> to simulate the original tag syntax.
    select @String = REPLACE(@String, '<' + @FactName + '>', @FactValue)

    select @ident = @ident + 1

end

select @String


If anyone who reads this understands what the hell I just did, I'd be happy to hear any ways you think this could be done better.