Showing posts with label SSMS.

Tuesday, August 13, 2013

Find What Filegroup a Given Index Is On

If you use the SQL Server system procedure sp_helpindex, you'll get back index information for a table, including a concatenated description string that tells you which filegroup each index is on. I'm currently cleaning up tables whose indexes were created on the wrong filegroups, so I wanted a way to determine this information programmatically without running sp_helpindex against every table. To my surprise, I wasn't really able to find an article online on how to do this. There are plenty of articles on moving indexes to a new filegroup, which were helpful since I have to do that too, but nothing for this initial research stage. With that in mind, I'd like to show you what I came up with.

    -- List every index along with the filegroup (data space) it lives on.
    -- sys.data_spaces returns the filegroup, or the partition scheme for partitioned indexes.
    select 
        FilegroupName = ds.name, 
        IndexName = i.name,
        IndexType = i.type_desc,
        TableName = t.name
    from sys.indexes i
    inner join sys.data_spaces ds
        on i.data_space_id = ds.data_space_id
    inner join sys.tables t
        on i.object_id = t.object_id
    where i.type in (1, 2, 5, 6)  -- 1 = clustered, 2 = nonclustered, 5/6 = columnstore

I got this basically by running sp_helptext on sp_helpindex and seeing how it builds that string.
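For the cleanup step itself, the usual approach is to rebuild each misplaced index onto the correct filegroup with DROP_EXISTING. Here's a minimal sketch of that; the index, table, column, and filegroup names (IX_Orders_CustomerId, dbo.Orders, [FG_Indexes]) are made up for illustration:

    -- Rebuild an existing index onto a different filegroup in one statement.
    -- DROP_EXISTING = ON avoids dropping and recreating the index in two steps.
    create nonclustered index IX_Orders_CustomerId
        on dbo.Orders (CustomerId)
        with (drop_existing = on)
        on [FG_Indexes];

Keep in mind that rebuilding the clustered index this way moves the table's data along with it, so plan that one carefully.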

Tuesday, March 12, 2013

Non-deplorable use of triggers

So today I had a rare case where I could actually use a trigger without invoking the wrath of all the DBAs, developers and gnomes that live under my desk.

For starters, let me give my admittedly myopic view of why I (and most SQL developers I know) avoid triggers like The Plague. Up front, I admit that I probably don't know all there is to know about triggers or how best to implement them without pulling your hair out. That said, the trigger-based structures I've encountered tend to have one of the two following problems.

The first problem: triggers used to enforce data integrity restrict data in at least one way. There's an older system I occasionally have to work on which has a procedure that updates identifiers in about 20 tables, each table carrying triggers that reference still other tables (some the same ones touched by the procedure, some not). When the procedure fails, tracking down where the error occurred in the procedure is just the beginning. You then end up traversing a network of obscure DML-enforcing statements across dozens of objects. The end result is that most people who work on the system go out of their way to circumvent the triggers when something goes wrong rather than dig through them to fix the problem.

The next problem with triggers is that regardless of the use, there is a "hidden" execution cost on every transaction against the table the trigger is bound to. Imagine a table which had very low traffic when it was built, maybe 50 items added a day. Each time one of those is logged, a trigger fires to update an audit logging table and also update a reference table so that any identifiers which came in are historically corrected as well. Now imagine a few years go by: the developer who wrote the system has retired to become a pet psychiatrist, the company grows by leaps and bounds, and that table is now receiving 500,000 DML transactions a day, or 5 million. While there are certainly ways to remedy this situation, it might take a long time just to realize that there is a trigger on the table.

So again, maybe this is just the way I've grown accustomed to doing things, but integrity enforced by table constraints or through procedural logic is how I prefer to maintain data integrity.
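For what it's worth, here's the kind of thing I mean; a minimal sketch (the table and column names are invented) of enforcing the sort of rule a trigger might, but declaratively instead:

    -- Declarative integrity: visible in the table definition, no hidden execution path on every DML statement.
    create table dbo.OrderItem
    (
        OrderItemId int identity(1,1) primary key,
        OrderId     int not null
            constraint FK_OrderItem_Order references dbo.[Order] (OrderId),
        Quantity    int not null
            constraint CK_OrderItem_Quantity check (Quantity > 0)
    );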

That said, here's the situation I had today. A client was trying to upload some new data to one of our test environments, and throughout the day the data would (according to the client) appear and later disappear. I'll spare you the hour-long heated conversation between several teams, but in the end I took on the task of monitoring the tables which housed the data to see what happened to them throughout the day. Initially I set up an SSIS package, run periodically via an Agent job, to dump the data into a logging table. But on my bus ride home, I couldn't shake the annoying fact that this still would not necessarily catch the culprit statements. Thinking back on a presentation I'd given on logging tables (Change Tracking, Change Data Capture, etc.), it suddenly occurred to me that a trigger would be a perfect solution to this*.

I set up an AFTER INSERT, UPDATE, DELETE trigger on the two tables with the identifiers I was concerned with and had them dump any DML activity into a logging table. The logging table auto-increments its key to prevent any PK violations. The trigger additionally filtered the INSERTED and DELETED tables down to the 4 identifiers the client said were popping in and out, and I set up another Agent job to keep the table from growing beyond one month of history. I also wrapped the trigger creation in a clause so it is only deployed to development and test environments. There's certainly no reason it couldn't run in production as well, but the idea here was maximum visibility into the table's modifications with a minimal footprint on the database. So with a small, non-intrusive trigger, I was able to log the actions on the tables and identify when the records popped in and out of existence. A rough sketch of the shape of it follows.
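All of the object names below (dbo.ClientData, dbo.ClientDataAudit, the TickerId column, and the literal ID list) are invented for illustration; the real tables and identifiers were specific to our system, and the real deployment script wrapped the CREATE in a check so it only landed in dev and test.

    -- Audit trigger: capture any DML touching the identifiers under investigation.
    -- dbo.ClientDataAudit has an identity PK, so inserts here can never hit a PK violation.
    create trigger trg_ClientData_Audit
        on dbo.ClientData
        after insert, update, delete
    as
    begin
        set nocount on;

        -- Log the "after" image of inserts/updates and the "before" image of deletes/updates,
        -- but only for the handful of identifiers the client reported as disappearing.
        insert into dbo.ClientDataAudit (TickerId, AuditAction, LoggedAt)
        select i.TickerId, 'INSERT/UPDATE', getdate()
        from inserted i
        where i.TickerId in (101, 102, 103, 104)
        union all
        select d.TickerId, 'DELETE/UPDATE', getdate()
        from deleted d
        where d.TickerId in (101, 102, 103, 104);
    end;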

There are still a few drawbacks to this approach. First of all, maintaining the list of tracked tickers is a very manual process. While this is a rare situation that I'll probably only have to monitor for a few weeks, if it happened again I'd almost have to re-build the trigger from scratch. Second, ideally I would have the trigger "retire itself" after, say, a month so that if I forgot about it and moved on to become a pet psychiatrist, it wouldn't get lost in the tangle of 0s and 1s. Also (this is not really a drawback of triggers so much as a limitation), I would have liked a way to pass the calling procedure's information into the trigger in order to trace the source further, something like object_name(@@PROCID), but unfortunately the execution context of @@PROCID changes to that of the trigger when it fires.

In the end, however, this seemed like one of those times where it was the right tool for the job. It's refreshing to break out of the tunnel vision which often inadvertently affects one's programming style and find a legitimate use for something you'd historically written off.



* Change Tracking and CDC were not options due to database restrictions we have.

Tuesday, November 6, 2012

SQL Server Templates

SQL Server Management Studio comes with a bunch of pre-built templates for scripting out procedures, functions, tables, administrative tasks and more. Essentially they're just .sql files with some special syntax marking the places where values can be pasted in. The syntax is pretty easy to get a handle on, but at first glance it can seem a little strange. Here's a sample of what you might see if you were looking at a SQL Server template (minus all the red lines I added).



There are several parts to this. Let's start with the window that's currently being displayed (in the picture, it's the one with the header "Specify Values for Template Parameters"). To get to this window, you must have a query window open in SSMS, and it wouldn't hurt to have a few tags in it (more on that later). In the query window, press Ctrl+Shift+M. I have no idea what the M is a mnemonic for... para(M)eters? Whatever. Once you're in this view, you'll be able to replace the special notation inside the script with the values you input on this screen.

So how do you set that up? You put in tags with the notation
<[Parameter], [Type], [Value]>

[Parameter] is the identifier for your replacements. Anywhere in your script, any time you have a tag whose parameter is, say, Database, every instance of that Database tag will be replaced with the value you put in the popup window. This is the only part of the tag which is NOT optional. Think of it as your primary key.

[Type] is really just a hint to yourself about what datatype the script is expecting; often it's irrelevant. If you're building a new proc, you might have a tag whose type is Sysname, but you could just as easily have put "Int" instead of "Sysname" and it wouldn't care. It's not a data type validator; it's just there so that if you have a script which requires data of a certain type, you know what to enter. It's worth noting that this part is entirely optional. Also, if you change the type on later instances of the tag throughout the script, it won't care; only the first instance of the tag is used to populate the [Type] column in the popup window.

[Value] is an optional default value for the field. In the example I showed above, more often than not I'm looking for information_schema.columns, so I set the default of the SubType parameter to "Columns". If I want Routines or Tables instead, I just replace the "Columns" value with the one I want.
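To make that concrete, here's roughly what a template along those lines might look like; a small sketch using the notation described above (the tags in my real template may differ):

    -- Template sketch: with this in a query window, Ctrl+Shift+M turns each tag into a parameter.
    -- The SubType parameter defaults to Columns; swap in Routines or Tables when prompted.
    select *
    from <Database, sysname, master>.information_schema.<SubType, sysname, Columns>;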



When you then click OK, the values you entered are propagated into those tags throughout your script. It's worth mentioning that SSMS treats each individual replacement as its own undoable action, so if you've got a huge parameter list and you accidentally hit OK too soon, all the values you did (or probably didn't) submit will have to be undone one by one until you get back to your original script.



My final note on this: if you don't have SQL 2012, or you do and can't figure out how to use the snippets feature, get something like the SSMS Tools Pack. It's free, and comes with a bunch of helpful tools, my favorite of which is a really simple code snippet interface. I've built little 2-3 character shortcuts for some of my most commonly used scripts, such as the one I listed here. I type "info"+Tab and it populates my query window with that code. I've got others as well: "tab"+Tab for tables, "IX" for indexes, and so on. When you combine this template format with the ability to call these templates up in a fraction of a second, you can really cut down extra typing time a LOT.

Monday, July 30, 2012

Reset SQL Server 2008 SSMS Shortcuts

Recently I've been playing with gVim, trying to get into the lightning-fast editing that is said to come with using vi as an editor. Problem is, unless I'm mistaken (I'd be happy to be proven wrong here), it's difficult or impossible to wire up the common tasks I use in SSMS: F5 to execute, hide the results pane, new/next window, etc.

An add-on I was trying out called ViEmu added a lot of great vi functionality to SSMS. It's something I think users adept at vi would love, and maybe even a plain SSMS user could learn to love. But I'm new, and like anyone starting out with vi, it's frustrating, so with the hundreds of looming projects which need to get done, I guess learning vi will have to wait.

What does this have to do with my posting? This add-on (as do many other SSMS add-ons) remaps many commonly used shortcuts: Ctrl+R to hide the results pane, Alt+W to access window switching, even Ctrl+N for a new query window. Cue the Google search for resetting keyboard shortcuts in SSMS.

Turns out that, for whatever reason, SSMS really doesn't want you to be able to easily change your shortcut settings, and there was a surprising dearth of information available on the interwebs on this topic. I got it to work, though, so let me distill what I can for you.

  1. SSMS doesn't store keyboard shortcut information in the registry but rather in local files, so uninstalling won't necessarily help.
  2. The files are user-specific, so they are stored in AppData as opposed to somewhere in the install directory.
  3. SSMS stores keyboard shortcut data in .vsk files (two of which seem to be responsible for the settings concerning SSMS).


You have to go to your AppData directory. Depending on your version and how your machine(s) is/are set up, your path may look something like this:

C:\Documents and Settings\ShayShay\Application Data\Microsoft\Microsoft SQL Server\100\Tools\Shell

In there, you're looking for two .vsk files: User.vsk and/or Current.vsk. I found them both in the directory mentioned above, but it sounds as though they may live in slightly different paths within the SSMS app data directory.

Point is, close SSMS, delete those two files (or, as I did, cut and paste them to your desktop to be safe), then start SSMS back up. Try something simple: run select 1 with F5, then Ctrl+R to toggle the results pane, or really whichever of your previously existing shortcuts was broken. If it's still borked, it's time to go hunting through your app data directories for other offending .vsk files.

NOTE: I'm not sure what the result of deleting .vsk files at will is. So far, SSMS appears to auto-recreate the necessary files with default settings (what I wanted), but always be careful snipping out random files. I'd really recommend cutting and pasting them to your desktop or somewhere similar so that if things go horribly wrong, you don't end up in a worse situation than you started in.