Tuesday, December 17, 2013
Amber Bobbles
A lot of the parallel universe in Fringe is reminiscent of "The Peace War" by Vernor Vinge. The first thing that tipped me off was the episode where the twin breaks his brother out of the amber, and the other side's Fringe agents admit they know the people encased in it are just in stasis, and that the public would revolt if they knew their loved ones could be broken out. After that, I noticed that the other world's Fringe Division is a lot like the Peace Authority in Vinge's book (Fringe, Vinge, coincidence? I think not). Good stuff!
Saturday, November 23, 2013
Sweet Mashed Potatoes
No, it's not mashed potatoes with sugar, it's mashed potatoes using SWEET potatoes! Healthier! Here we go!
Ingredients:
- 4 Large Sweet Potatoes
- Smart Balance Lite (or butter if you're a fatty)
- Skim Milk
- 1 Packet Hidden Valley Ranch mix

1) In a large pot, boil enough water to cover 3-4 large sweet potatoes (I used the yellowish-white ones, but this would probably work with orange ones too)
2) While the water is boiling, cut up the potatoes roughly into 1-2 inch cubes.
3) Once the water boils, put all the potato chunks in the water and boil for 12-14 minutes.
4) Drain the water and potatoes through a colander.
5) Put the potatoes in a big bowl, and start mashing.
6) Add 3-4 Tbsp Smart Balance, the ranch mix, and about 1/3 cup of milk (start small, and adjust to taste and texture).
7) Keep mashing
8) EAT!
Labels:
easy,
healthy,
low fat,
mashed,
potato,
potatoes,
ranch,
sweet potato,
sweet potatoes,
Thanksgiving
Tuesday, August 13, 2013
Find What Filegroup a Given Index Is On
If you use the SQL Server system procedure sp_helpindex, you'll get back a table's index information, including a concatenated description string that tells you which filegroup each index is on.
I have an issue right now where we're doing cleanup of tables which had indexes created on the wrong filegroups, so I wanted a way to programmatically determine this information without having to run sp_helpindex on each table.
To my surprise, I wasn't really able to find an article online for how to do this. Plenty of articles on moving the indexes to a new filegroup, which were helpful since I have to do that, but not for this initial research stage. With that in mind, I'd like to show you what I came up with.
I got this basically by using sp_helptext on sp_helpindex.
select FilegroupName = ds.name,
       IndexName = i.name,
       IndexType = i.type_desc,
       TableName = t.name
from sys.indexes i
inner join sys.data_spaces ds
    on i.data_space_id = ds.data_space_id
inner join sys.tables t
    on i.object_id = t.object_id
where i.type in (1, 2, 5, 6)
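For the follow-up cleanup step of actually moving an index, the usual approach is CREATE INDEX with DROP_EXISTING, which rebuilds the index directly on the target filegroup in a single operation. A quick sketch; the table, index, column, and filegroup names here are hypothetical placeholders, so substitute the ones the query above turns up:

```sql
-- Rebuild an existing index onto a different filegroup.
-- dbo.MyTable, IX_MyTable_ColumnA, ColumnA, and [SECONDARY] are
-- placeholder names, not real objects from this post.
create nonclustered index IX_MyTable_ColumnA
    on dbo.MyTable (ColumnA)
    with (drop_existing = on)
    on [SECONDARY];
```

The index definition (keys, includes, uniqueness) has to match the existing index for DROP_EXISTING to work the way you expect, so script out the current definition first.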
Labels:
2005,
2008,
2012,
filegroup,
find,
Find what filegroup,
How to,
index,
index is on,
indexes,
indices,
SQL,
SQL Server,
SSMS,
TSQL
Wednesday, August 7, 2013
Ninja 300, Pros & Cons
I wanted to finally get around to writing a review of the new Kawasaki Ninja 300 I got earlier this year.
Super brief history on me: the Ninja 300 (also known as the EX300) is my first motorcycle. My wife got me a Living Social deal last year for a motorcycle safety course for my license endorsement, and it was all over after that. After weeks of fervid searching and balancing performance vs. responsibility vs. cost, I opted to get the new EX300 as my first bike. Having no experience with other bikes beyond single rides on friends' bikes, these are really my impressions of owning this and only this bike. I'll talk about the pros and cons of the bike, and where I plan to go next (and might suggest you go, too).
Overall, functionally, I have very few complaints about the EX300. Most of them have to do with the fact that it is, indeed, a smaller bike, and the physical limitations associated with that. I'll discuss those issues later, but first let's talk about objective facts of the bike.
The first thing I noticed about the EX300 is that it's light: about 400 lbs with all the gas and fluids in it (wet weight). It's also small. The bike is so easy to maneuver. I've taken it on canyon roads and through quick turns like roundabouts. There's a particular route I like to take on the weekends near where I live which has a roundabout in the middle of the road, maybe 20 yards around, which I take at 30-40 mph, and when you throw the bike to the right and then the left, it responds impeccably, like it wants to turn quickly. Once you exit the turn, the engine responds quickly as well to give you power when you open the throttle. There's about a quarter-second delay between when you open the throttle and when the engine really gets the message you said "go," but then it obeys. At any speed under 50 mph you can get pretty good acceleration out of the engine. Torque from about 4k to 12k RPM is pretty stable, which is nice because you don't have to down-shift to get power. Given that you only get about 8 mph out of each gear, that's good, because you could otherwise have to do a lot more shifting to make sure you're in the power band for your speed. The stock riding position is very "standard," not sporty.
The handlebars are pretty high, which means when you're riding around you're sitting pretty much straight up with a small lean forward. My friend who rides a cruiser (and used to ride a Yamaha R1) says he much prefers the upright posture, as the racing tuck gets uncomfortable very quickly. I've ridden his R1 briefly, and the position on the Yamaha is absolutely leaned over and different from the EX300. Personally, I rather like the tuck position; I think I feel faster and more agile. And since highway riding really pushes the engine to its limits, I end up tucked just to eke out every bit of power and aerodynamics I can.
And speaking of highway riding, this is an area the bike really struggles in. Compared to mountains and canyons, where you're going between 30 and 50 the whole time and the bike truly shines, on the highway you're trying to go between 65 and 90, which is all in 6th gear. You can get on the highway pretty well since the bike accelerates from 0 to 60 quickly, but as you go faster, as with any vehicle, your acceleration falls off sharply. You have to plan your moves well in advance. Take, for instance, passing a semi. You're going 65 mph and need to get around the semi because he can't see you. You can't just change lanes, open the throttle, and get past. You have to get in the passing lane and power down a little to open a gap between you and the back of the semi so you have a "runway" to pick up your speed. Open the throttle at 65 and the bike hardly responds; it will pick up speed, but very gradually. You have to use the "runway" you opened up to really build speed, so that by the time you enter the truck's blind spot, you're already going 80+. Also, since the engine lacks grunt at high speeds, you end up in a tuck a lot of the time just to maintain a fast velocity, as I mentioned before. Personally, I don't mind that position, but it's more of a necessity than just looking and feeling cool or being agile on the bike.
Now moving beyond the functionality of the bike, I'd like to talk about simply owning it and what doors it opens, or closes, to exploring that functionality. As I mentioned at the start of the article, once I rode on two wheels, I didn't want to do anything else. Maybe it's just the honeymoon phase with me and my bike, but all I want to do now is ride, and ride hard. I've already dropped the bike once (took a long turn at about 45 and hit some gravel). The fairings got scratched and I had to replace the shift lever, but other than that, the bike was fine. I take my protective gear seriously, so I was basically fine, too. What's more, it didn't turn me off to riding in the least. What I really want to do is take my bike to the track, where I can push its limits in a controlled environment. I'm also a poor computer programmer, so my budget for motorcycle parts and equipment is minimal. To really get serious about track riding, or possibly racing, I'd need to make some mods to the bike: new bodywork, safety wiring, tires, etc. Not to mention fixing the thing if (when) it goes down. So how much effort do I want to put into transforming the bike? Well, another thing about the EX300 is that (perhaps just because it's a new bike) I haven't found many racing classes (any, actually, but that doesn't mean they don't exist) which allow 300s to race. There are 250 classes and 600 classes, but the 250 classes don't seem to allow the 300s, unlike the 600 classes, which have some wiggle room for things like the ZX-6R or Daytona 675R. Which is a shame, because one of the things I really like about the EX300 is that I can push the bike to its limits without shitting my pants. Racing a 300 would be awesome for me, but that doesn't appear to be in the cards right now. So I could get an old 250, work on that and modify it into a race bike, but if I'm going to get a new bike, I might as well get a 600, or something else with enough oomph to satiate my need for speed.
The problem comes back to the fact that it's an oddball displacement, and the future of its acceptance is uncertain.
So what are the takeaways here? If you're a beginner rider and didn't take to motorcycle riding like a fish to water, this might be a good bet. It also has the benefit of keeping me out of too much trouble. I go fast on the 300 often, and if I had a 600, I'd push it just as hard. It lets me have fun, but it self-limits. Difficult though that may be to accept, that's actually a really good thing. I've even given the 300 too much throttle and almost lost control of it; were I on a bigger bike, it would be in a fence. I hate using the phrase "it's a good starter bike" because that's like saying a child is "spirited". Let's face it: the kid sucks and the bike is weak. But at the end of the day, it's still a motorcycle, and it has more than enough power to get you from A to Z quickly, or wrap you around a tree if you fail to respect it. With that in mind, if you want a more reserved but fun riding experience with a sporty feel (i.e. not standard or cruiser), this is the bike for you. But if you took the MSF class and by the end just wanted to get out on the road and open up that throttle, you'll outgrow this bike too quickly.
In short: Fantastic bike, but you'll probably want more, and quickly.
Monday, June 10, 2013
When SQL Output Makes Me Cry
I'm a big proponent of the OUTPUT clause introduced in SQL Server 2005. Having the ability to output the rows affected by a given DML statement has a lot of great uses. You can log DML actions without the need for triggers or costly features like CDC, you can capture data moving around inside a procedure, or debug a one-off script you're running. I did, however, just run into a situation which makes me want to cry.
Typically with the DML procedures I work with, we like to put all the data manipulation, massaging and preparation at the start, and then at the end persist all this data to the presentation tables. While I always realized why, it became especially apparent here. If it's not clear, the benefit here is that you have all your staging done in temporary objects and if something goes wrong, you can break out and not have to worry about anything live being affected. You also avoid convoluted schemes of transactions and try catch blocks. Also you know where to look for your DML statements modifying presentation tables. Now this doesn't always work, but I think for the most part, a small amount of effort can fit this mold. One big problem comes with debugging a procedure. You need to see data flow through the procedure in order to test individual parts and see where things are going wrong. If you have a presentation table DML action early in the procedure, if you want to run it in any environment but a development environment, you likely wont have the permissions to affect the tables (and if you do, you still probably don't want to). This is easy to get around by changing the
insert into MyPresentationTable (ColumnA, ColumnB)
select ColumnA, ColumnB
from #TempTable

to something like

select ColumnA, ColumnB
into #MyTempPresentationTable
from #TempTable
Now you have a temp object which holds the temporary data you need, and you can move on with debugging.
However, with the advent of the OUTPUT clause, you can now make selections from things like Merge statements, output into statements, and so forth. The case I have now is that a DML action is performed way early in the procedure, via a merge statement, and the OUTPUT of that is then used to feed a temp table used in just about every subsequent statement in the procedure. In the procedure, the $action attribute is used in conjunction with deleted.* and inserted.*. Trying to re-create the contents of a merge statement is difficult enough to do by hand, but when the situational outputs of a merge statement are then used for everything else, running this in an environment where you can't just run it and fill up objects to debug is a nightmare.
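To make the pattern concrete, here's a minimal sketch of the kind of statement I'm describing; all table and column names are hypothetical stand-ins. A MERGE modifies the presentation table immediately, and its OUTPUT, including $action and the deleted/inserted values, is captured into a temp table that the rest of the procedure depends on:

```sql
-- Hypothetical sketch: the MERGE hits the real table up front, and its
-- OUTPUT feeds the temp table everything downstream relies on.
create table #MergeResults (MergeAction nvarchar(10), OldValue int, NewValue int);

insert into #MergeResults (MergeAction, OldValue, NewValue)
select MergeAction, OldValue, NewValue
from (
    merge dbo.PresentationTable as tgt
    using #StagedData as src
        on tgt.Id = src.Id
    when matched then
        update set tgt.Value = src.Value
    when not matched then
        insert (Id, Value) values (src.Id, src.Value)
    output $action, deleted.Value, inserted.Value
) as changes (MergeAction, OldValue, NewValue);
```

Unlike the simple insert/select swap shown earlier, there's no easy temp-only substitute here: the OUTPUT rows don't exist unless the MERGE actually runs against the real table.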
I'm a big fan of the OUTPUT clause, and actually a pretty big fan of the MERGE statement as well, but please consider how they're being used. New features are wonderful, but they don't have the time-tested trial and error of longer-standing methods. You have to give more thought to how you use them, because dealing with them and debugging them is going to raise issues which don't typically come up with more core features.
Labels:
bad practice,
Code,
feature,
MERGE,
new features,
OUTPUT,
SQL,
SQL 2008
Sunday, March 17, 2013
Motorcycle Helmets
I recently completed the MSF BRC to get my motorcycle endorsement and, for the better, was told in excruciating detail how important protective gear is; specifically helmets. As if it wasn't obvious enough, a helmet is NOT the place you want to save money on gear. Brain injuries are the #1 cause of death in motorcycle accidents. Furthermore, DOT certification is a very minimal standard; the 50 dollar helmets you buy at Wal-Mart are typically DOT certified. To get a DOT certification, a helmet pretty much just has to survive an impact at around 15 miles per hour. When was the last time you were driving 15 miles per hour? I can bicycle faster than that. Snell certification is a much higher standard. Now, any helmet is better than no helmet. You don't necessarily have to get an Arai Corsair-V, but plan for what you plan to be doing. Even riding from Boulder to Superior you're going to be getting up to around 60 miles per hour. If something goes wrong, you probably want your dome safe.
Now another thing about helmets is if you take a spill on one, get a new one. There's a lot of damage or structural instability which can be obscured by the paint job and the outer shell. I recently was reading a Yelp review of a local motorcycle shop where a customer gave the shop a rating of zero because he dropped a helmet on the floor and they made him buy it. He started ranting about how if the helmet is Snell approved and he should trust his life to it, how they could say it's broken after a single drop. This customer viewed it as the shop being assholes.
Nothing could be further from the truth. It's true, dropping a high-grade helmet on the ground will probably not make a difference. The key word there is "probably". It IS, however, completely plausible that dropping the helmet damaged it, and the shop would be grossly negligent to sell it to a customer as a way to keep their head safe. Secondly, the customer said he grabbed it off the shelf and then dropped it. Shops keep the cheap shit out on the floor, so they probably knew the likelihood of it being damaged was even worse. I understand it sucks having to buy a helmet you dropped, but the fact of the matter is you've effectively broken the helmet. If you walked into the shop and saw someone drop a helmet, would you then buy it? Of course not!
The idea behind writing off a helmet after a single fall isn't to try to scam people into buying more helmets. A helmet is a safety measure when you get in a crash. When your head hits the median at 60 miles an hour and you get up and walk away, thank your lucky stars you bought a good helmet from a supplier who didn't drop it on the floor, and then go buy another one. Don't keep riding around with a damaged piece of equipment.
Labels:
Arai,
Colorado,
Helmet,
Motorcycle,
Powersports,
Safety,
Yelp
Tuesday, March 12, 2013
Non-deplorable use of triggers
So I had a rare case today to actually use a trigger without invoking the wrath of all the DBAs, developers and gnomes that live under my desk.
For starters, let me give my little myopic view of why I (and most SQL developers I know) avoid triggers like The Plague. Up front, I admit that I probably don't know all there is to know about triggers, or how best to implement them to keep from pulling your hair out. That said, the trigger-based structures I've worked with have one or both of the following problems.
First, triggers used to enforce data integrity necessarily restrict data in at least one way. There's an older system I occasionally have to work on which has a procedure that updates identifiers in about 20 tables, each table containing triggers referencing other tables (some the same ones in the procedure, some not). When the procedure fails, tracking down where the error took place in the procedure is just the beginning. You then end up traversing a network of obscure DML-enforcing statements across dozens of objects. The end result is that most people who work on the system go to great lengths to circumvent the triggers if something goes wrong rather than dig through them to fix the problem.
The next problem with triggers is that, regardless of the use, there is a "hidden" execution cost on every transaction against the table to which the trigger is bound. Imagine you have a table which, when built, had very low traffic; maybe 50 items added a day. Each time one of those is logged, a trigger fires to update an audit logging table, and additionally updates a reference table to make sure that any identifiers which came in are historically corrected as well. Now imagine a few years go by and the developer who wrote the system has retired to become a pet psychiatrist. The company grows by leaps and bounds, and now that table is receiving 500,000 DML transactions a day, or 5 million. While there are certainly ways to remedy this situation, it might take a long time to even realize that there is a trigger on the table.
So again, maybe this is just the way I've grown accustomed to doing things, but integrity enforced by table constraints or through procedural logic is the way I prefer to maintain data integrity.
That said, here's the situation I had today. A client was trying to upload some new data to one of our test environments, and throughout the day the data would (according to the client) appear, and later disappear. I'll spare you the hour long heated conversation we had between several teams, but in the end, I undertook the task of monitoring the tables which housed the data to see what they did throughout the day. Initially I set up an SSIS package which would periodically throughout the day dump the data into a logging table via an Agent job. But on my bus ride home, the annoying fact that this still would not necessarily catch the culprit statements persisted. Thinking back on a presentation I'd given on logging tables (Change Tracking, Change Data Capture, etc.) suddenly it occurred to me that a trigger would be a perfect solution to this*.
I set up an AFTER INSERT, UPDATE, DELETE trigger on the two tables with the identifiers I was concerned with and had it dump any DML statements into a logging table. The logging table used an auto-incrementing key to prevent any PK violations. The trigger additionally filtered the INSERTED and DELETED tables down to the 4 identifiers the client said were popping in and out, and I set up another Agent job to keep the table from growing larger than one month of history. Additionally, I added a clause to the trigger's deployment script so it is only created in development and test environments. There's certainly no reason it could not run in production as well, but the idea here was maximum visibility into the table's modifications with a minimal footprint on the database. So with a small, non-intrusive trigger, I was able to log the actions on the table and identify when the records popped in and out of existence.
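A minimal sketch of what such a monitoring trigger can look like. The table, column, and identifier values here are hypothetical placeholders, not the actual objects from this incident:

```sql
-- Hypothetical monitoring trigger: log every INSERT/UPDATE/DELETE that
-- touches the identifiers under investigation. An UPDATE appears as a
-- row in both inserted (new values) and deleted (old values).
create trigger trg_MonitoredTable_Audit
on dbo.MonitoredTable
after insert, update, delete
as
begin
    set nocount on;

    insert into dbo.MonitoredTableLog (ChangeType, Id, Value, LoggedAt)
    select 'INSERTED', Id, Value, getdate()
    from inserted
    where Id in (101, 102, 103, 104);  -- the identifiers being tracked

    insert into dbo.MonitoredTableLog (ChangeType, Id, Value, LoggedAt)
    select 'DELETED', Id, Value, getdate()
    from deleted
    where Id in (101, 102, 103, 104);
end;
```

The log table itself just needs an identity primary key plus those four columns, so concurrent writes never collide on the PK.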
There are still a few drawbacks to this approach. First of all, maintaining the list of tracked identifiers is a very manual process. While this is a rare situation that I'll probably only have to monitor for a few weeks, if it happened again I'd almost have to re-build the trigger from scratch. Second, ideally I would have the trigger "retire itself" after, say, a month, so that if I forgot about it when I moved on to become a pet psychiatrist, the trigger wouldn't get lost in the tangle of 0s and 1s. Also, and this is not really a drawback of a trigger but rather a limitation: I would have liked a way to pass the calling procedure's information into the trigger in order to further trace the source (something like object_name(@@PROCID), but unfortunately the execution context of @@PROCID changes to that of the trigger once it fires).
In the end however, this seemed like one of those times where it was the right tool for the job. It's refreshing to break out of the tunnel vision which often inadvertently affects ones programming styles and find a legitimate use for something you'd historically written off.
* Change Tracking and CDC were not options due to database restrictions we have.
For starters, let me give my little myopic view of why I (and most SQL developers I know) avoid triggers like the plague. Up front, I admit I probably don't know all there is to know about triggers, or how best to implement them without pulling your hair out. That said, the trigger-laden systems I've worked with suffer from one of the two following problems.
First, triggers used to enforce data integrity necessarily restrict data in some way. There's an older system I occasionally have to work on which has a procedure that updates identifiers in about 20 tables, each table carrying triggers that reference still other tables (some the same ones touched by the procedure, some not). When the procedure fails, tracing down where the error occurred in the procedure is just the beginning: you then have to traverse a network of obscure DML-enforcing statements across dozens of objects. The end result is that most people who work on the system go out of their way to circumvent the triggers when something goes wrong rather than dig through them to fix the problem.
The second problem is that, regardless of the use, a trigger imposes a "hidden" execution cost on every transaction against the table it is bound to. Imagine a table that had very low traffic when it was built; maybe 50 items added a day. Each time one of those is logged, a trigger fires to update an audit logging table, and additionally to correct historical identifiers in a reference table. Now imagine a few years go by: the developer who wrote the system has retired to become a pet psychiatrist, the company has grown by leaps and bounds, and that table is receiving 500,000 (or 5 million) DML transactions a day. There are certainly ways to remedy the situation, but it might take a long time just to realize there is a trigger on the table at all.
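Before blaming a table's mysterious overhead, it's worth checking whether a trigger is even there. A minimal sketch against SQL Server's catalog views (the table name is a hypothetical placeholder):

```sql
-- List any triggers bound to a given table, and whether they're disabled.
-- dbo.MyBusyTable is an illustrative name; substitute your own.
SELECT  tr.name                   AS trigger_name,
        tr.is_disabled,
        OBJECT_NAME(tr.parent_id) AS table_name
FROM    sys.triggers AS tr
WHERE   tr.parent_id = OBJECT_ID(N'dbo.MyBusyTable');
```

An empty result set means you can rule triggers out and move on to the usual suspects (indexes, statistics, blocking).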
So again, maybe this is just the way I've grown accustomed to doing things, but table constraints and procedural logic are how I prefer to enforce data integrity.
That said, here's the situation I had today. A client was trying to upload some new data to one of our test environments, and throughout the day the data would (according to the client) appear, and later disappear. I'll spare you the hour-long heated conversation we had between several teams, but in the end, I undertook the task of monitoring the tables which housed the data to see what they did throughout the day. Initially, I set up an SSIS package which an Agent job ran periodically throughout the day to dump the data into a logging table. But on my bus ride home, the annoying fact that this still would not necessarily catch the culprit statements nagged at me. Thinking back on a presentation I'd given on change-logging features (Change Tracking, Change Data Capture, etc.), it suddenly occurred to me that a trigger would be a perfect solution to this*.
I set up an AFTER INSERT, UPDATE, DELETE trigger on the two tables containing the identifiers I was concerned with and had it dump any DML activity into a logging table. The logging table's key auto-increments to prevent any PK violations. The trigger additionally filtered the inserted and deleted tables down to the four identifiers the client said were popping in and out, and I set up another Agent job to keep the table from growing beyond one month of history. I also added a clause to the trigger's deployment script so it would only be created in development and test environments. There's certainly no reason it couldn't run in production as well, but the goal here was maximum visibility into the table's modifications with a minimal footprint on the database. So with a small, non-intrusive trigger, I was able to log the actions on the table and identify when the records popped in and out of existence.
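A minimal sketch of what such a trigger might look like. The table names, column names, and identifier values are illustrative stand-ins, not the actual schema:

```sql
-- Audit table: IDENTITY key so concurrent writes never collide on the PK.
CREATE TABLE dbo.TickerAudit
(
    AuditId   INT IDENTITY(1, 1) PRIMARY KEY,
    TickerId  INT       NOT NULL,
    Operation CHAR(1)   NOT NULL,  -- 'I' = inserted row, 'D' = deleted row
    LoggedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER dbo.trg_Ticker_Audit
ON dbo.Ticker
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Filter to the handful of identifiers under investigation.
    -- (An UPDATE shows up as a row in both inserted and deleted.)
    INSERT dbo.TickerAudit (TickerId, Operation)
    SELECT i.TickerId, 'I'
    FROM   inserted AS i
    WHERE  i.TickerId IN (101, 102, 103, 104)
    UNION ALL
    SELECT d.TickerId, 'D'
    FROM   deleted AS d
    WHERE  d.TickerId IN (101, 102, 103, 104);
END;
```

The deployment script could wrap the CREATE statements in a server-name check (e.g. `IF @@SERVERNAME LIKE N'%DEV%'` around dynamic SQL) so the trigger only lands in non-production environments.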
There are still a few drawbacks to this approach. First, maintaining the list of tracked tickers is a very manual process. This is a rare situation I'll probably only have to monitor for a few weeks, but if it happened again, I'd almost have to rebuild the trigger from scratch. Second, ideally the trigger would "retire itself" after, say, a month, so that if I forgot about it and moved on to become a pet psychiatrist, it wouldn't get lost in the tangle of 0s and 1s. Also (not really a drawback of triggers so much as a limitation), I would have liked a way to pass the calling procedure's information into the trigger to further trace the source. Something like OBJECT_NAME(@@PROCID) would be ideal, but unfortunately the execution context of @@PROCID changes to that of the trigger when it fires.
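The one-month retention is just a scheduled delete. Assuming an audit table with a timestamp column (the hypothetical dbo.TickerAudit.LoggedAt from earlier), the Agent job's step might be no more than:

```sql
-- Run nightly by a SQL Agent job: keep roughly one month of history.
DELETE FROM dbo.TickerAudit
WHERE LoggedAt < DATEADD(MONTH, -1, SYSUTCDATETIME());
```

The "retire itself" idea could be bolted onto the same job: once a cutoff date passes, issue `DISABLE TRIGGER dbo.trg_Ticker_Audit ON dbo.Ticker;` so the trigger stops firing even if nobody remembers to drop it.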
In the end, however, this seemed like one of those times where it was the right tool for the job. It's refreshing to break out of the tunnel vision that often inadvertently shapes one's programming style and find a legitimate use for something you'd historically written off.
* Change Tracking and CDC were not options due to database restrictions we have.
Labels:
2008,
Dynamic SQL,
SQL,
SQL 2008,
SQL 2012,
SQL Server,
SSMS,
T-SQL,
Transact SQL,
Trigger,
Triggers,
TSQL
Tuesday, January 29, 2013
Trade like a bro.
DP Ameristar is proud to announce its newest venture, catering financial education and services to non-professional investors.
Introducing DP Ameribro. Our advanced search tools turn commonly misused terms like “what’s up with my shit?” into searches for past, present and projected future earnings on your portfolio.
Fed up with complicated ratios and statistics? Our proprietary rating of bro points takes the suck out of success! Equities are easily rated on a three-point scale of “WTF” to “Meh” to “Fuckin’ Sweet”. Additionally, we have partnered with some of the largest financial services companies to define “bro-specific” sectors and industries including Workout Accessories, High Proof Alcohol Products, Kanye West Related Consumer Items and Ringtones.
Live like a bro, Trade like a pro. DP Ameristar
Hoboken, FL
All rights reserved.
Subscribe to:
Posts (Atom)