Tag Archives: Development

SSRS Videos

I posted 2 new vids this week on SSRS.  The first is a data-driven subscription primer that’ll get you up and running with not only what it takes to set it up, but the different methods you have for sending the results exactly the way you want them.  I even go out into Profiler to show you the effect it has on the system.  It’s about 30mins, and you can find it here:  http://midnightdba.itbookworm.com/VidPages/SSRSDataDrivenSubscription/SSRSDataDrivenSubscription.aspx

The 2nd vid is about deploying reports from VS.  This is a common question I get from noobs and because it’s not obvious how it works, I went ahead and filmed it.  I also cover one of the common problems you have when deploying reports using this method and how to solve it.  It’s about 4mins and can be found here:
http://midnightdba.itbookworm.com/VidPages/SSRSDeployReports/SSRSDeployReports.aspx

Partitioning and the free lunch

Quite often there’s no benefit to a feature if you’re not using it, but this time that’s not the case. Even if you’re not using partitioning you can still take advantage of it in a really cool way.

Let’s say you’ve got a table that you load every night, and you want to move the current data to a new table and truncate the current one. Here’s how you’d do something like that today.

1. Select * into T2 from T1
2. Insert into T2 select * from T1
3. Rename T1 to T2. Then recreate T1.

But let’s face it, none of those choices are really ideal as they take time. Though of those I would choose 1, 3, 2 in order. I don’t like renaming objects because the new object now has a new ID, and it’s hard to track them over time if they keep changing IDs. So those are your choices if you’re not taking advantage of partitioning.

If you ARE taking advantage of partitioning however, you’ve got a different choice. Now you can switch the data out of the current table and into the 2nd w/o any data actually moving at all. And the best part is that you don’t even have to have partitioned your tables at all. That’s right… your tables are automatically partitioned as a single-partition table so you’re using partitioning whether you know it or not. So that means that in order to truncate T1 and move its data to T2 you only have to do this:

alter table T1
switch partition 1 to T2
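And if you don’t believe your table is already partitioned, take a quick peek at the catalog (swap in your own table name):

select partition_number, rows
from sys.partitions
where object_id = OBJECT_ID('dbo.T1')

A plain, unpartitioned table comes back as a single row with partition_number = 1, and that’s the partition we’re switching out.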

Now there are some caveats, but as with most things it’s not too bad. The biggest 3 are:
1. The table structures have to be identical.
2. They have to be on the same filegroup: the partition you’re switching out and the one you’re switching it into.
3. The target table/partition has to be empty.

There are some others that are advised, but those 3 are the only show-stoppers. And once you’ve got the data into the new table, you can then move it to a new filegroup, or do whatever with it you like.
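Just to put the whole thing together, here’s a minimal sketch of the round trip. The table definition is just something I made up for the demo; what matters is that T2 is an exact structural copy on the same filegroup and starts out empty. And since T1 isn’t partitioned, you can also write the switch without the PARTITION clause:

create table T1 (ID int not null, Val varchar(50))
create table T2 (ID int not null, Val varchar(50))  -- identical structure, same filegroup, empty

insert T1 select 1, 'A'
insert T1 select 2, 'B'

alter table T1 switch to T2  -- metadata-only; T1 is now empty and T2 has the rows

select count(*) from T1  -- 0
select count(*) from T2  -- 2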

So anyway, it’s a wicked cool method for getting rid of your current data even though you’re not officially partitioning your table.

A cleaner way to do dynamic SQL

Here’s something I wrote on SSC a few yrs ago and I thought I’d rewrite it here to make sure it’s fresh on everyone’s mind.
One of the things I hate the most about dynamic SQL is all those single quotes you have to count when building strings. I’m a DBA so we’re not very bright to begin with, and keeping track of the piles of single quotes makes my head spin. And forget about trying to insert something in the middle of all that garbage.
The example we’ll be working with today is a simple query that creates file move statements so you can move a DB with a lot of files to a new drive easily. I’ll throw in another component here in a minute.
So here’s the T-SQL we’ll be working with. It’s not as bad as some I’ve seen, but it’ll get the point across.

SELECT 'alter database CPY modify file (name = [' +
NAME + '] , FILENAME = ''' +
'P:\' + NAME + '\' + physical_name + ''')' + 'GO'
FROM sys.database_files

So you see here we’re stacking up these single quotes inside there so we can get the actual quotes printed in the resultset. Anybody who knows me knows I’m a big fan of code that writes code. And everything doesn’t have to be fully automated. There’s really something to be said for scripting the code generation and then making minor changes as needed, or just running it by hand. There are a few things I’d rather not run automatically, but just paste the code into the editor and run it manually. Something like this is one of those things. So anyway, let’s go about getting rid of all those single quotes that are stacked up inside there.

Declare @SQ char(1)
Set @SQ = char(39)

SELECT 'alter database CPY modify file (name = [' +
NAME + '] , FILENAME = ' + @SQ +
'P:\' + NAME + '\' + physical_name + @SQ + ')' + 'GO'
FROM sys.database_files 

Ok, explaining this a little. The magic happens in the 1st 2 lines. Declare a var as char(1), then set it equal to char(39). In ASCII, char(39) is a single quote. So now anywhere you have those double and triple quotes, just replace them with @SQ. It’s much easier to see where your SQs are supposed to go, and much easier to add stuff in the middle of it.

Notice also that I put GO stmts at the end. It’s not necessary for this operation, but for some it is. And even when it’s not, there are just some people who really like to see them in there. This is another one of those semi-automated processes I was talking about. If you want your GOs on a separate line, it just takes a little regex in SSMS. Start by pasting the results of the above query into a separate SSMS query window. Here’s what it’ll look like.
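It’ll just be a column of statements, something like this (these particular file names are made up):

alter database CPY modify file (name = [CPY_Data] , FILENAME = 'P:\CPY_Data\CPY_Data.mdf')GO
alter database CPY modify file (name = [CPY_Log] , FILENAME = 'P:\CPY_Log\CPY_Log.ldf')GO

Notice the GO jammed up against the end of each line. That’s what we’re about to fix.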

Now we’re just gonna do a little scrubbing in SSMS.

For this we’re going to choose regex (regular expressions) because they’re cool and super useful for these semi-automated functions.
So in your code window in SSMS hit ctrl+H to open the replace window.
Then turn on the ‘Use Regular expressions’ option, put GO in the ‘Find what’ box and \nGO in the ‘Replace with’ box, and hit Replace All.

Once that finishes, and it’ll be wicked fast, every GO will be sitting on its own line.

Now I’ll explain what’s important. You’re searching for the word GO, and replacing it with \nGO.
\n is the regex code for ‘new line’. There’s nothing more to it than that.

And one more thing for completeness. If you wanna see what other ASCII chars there are, I’ve got a little script that’ll show you.

/*
Author: Sean McCown
Date: 06/05/2003
Lists all char codes so you can find the code for the char you're looking for.
Tells that char(39) is ', etc.  Very nice to have around.
*/

Declare @Chars Table
	(
	Code varchar(10),
	Char varchar(4)
	)

Declare @i int
Set @i = 0

While @i < 256
	BEGIN
		Insert @Chars
		Select 'Char(' + cast(@i as varchar(4)) + ')', char(@i)
		Set @i = @i + 1
	END

Select * from @Chars

OK guys... that's my little corner of the world today. It's not widely useful but it does come in handy from time to time.

A Round of Patching

I just finished an interesting email thread with a user that’s not at all an uncommon scenario.  They were just installing a CU on some of their SQL boxes, and there was evidently a new bug introduced in that CU that causes their SSRS reports to render incorrectly.  The sad thing is that this is the kind of thing that should have been caught in testing.  Now of course, I don’t expect them to find everything during testing, but something as simple as the rendering of reports should have been caught early on.  Because the second they turned their reporting structure back on, report rendering started messing up.  Now, he assures me that they tested it, but what exactly did they test?  Did they do any user testing at all or did they just test the install of the patch itself?  I see that quite often.  I see shops claim to do patch testing, but all they’re concerned with is whether the install itself will fail.

Most of the time I blame managers and companies in general for this because the IT staff knows what’s at stake more than anyone but they’re not given the proper time and resources it takes to properly test.  Managers always claim to know the business needs better, yet they never seem to fully grasp the full implications of putting untested code into production.  All they want to know is will this bring down my server.  Unfortunately that always means something different to them than it does to us.  What they mean is will this fail and put us into an emergency situation?  And the answer is probably not.  However, disabling a critical piece of your business is putting you into an emergency situation.  And in this case they’re lucky it’s just report rendering.  What if it had been something more serious that caused some kind of logical corruption in their processing?

I would say that quite often the biggest problems caused by hotfixes are performance-related.  And by not testing a CU or hotfix properly you could bring your entire business to its knees.  You can slow your website down so much that nobody can get any real work done.  And what does that do for the reliability and confidence of your business?  It would be really nice sometimes if we in IT didn’t have to rely solely on people who know nothing about it to get our jobs done properly. 

And what’s amazing to me is that companies today still, after all this time and all the mistakes, don’t think it’s necessary to hire competent people to code and test.  Oh sure, they pay lip service to it all day long, but what do they really do about it?  Do they really slow down their dev cycles enough to do benchmarking and architect solutions properly?  Do they spend the money necessary to ensure that their code and servers are solid by hiring enough competent people?  Because every shop I’ve been in works their people so hard there’s no time for anything unexpected to happen.  So I think I’ve said this before, but now I’m saying it again.  It’s time to stop being children about IT.  It’s time to stop letting people who know nothing about it make the technical decisions that affect entire projects.  And it’s definitely time to stop thinking we can get by without testing things properly.  And that includes functional testing as well as performance benchmarking.

A dumb SSRS problem

I spent about an hour last night working to solve this stupid problem with a report so I thought I’d share it with you guys.  I’ve had this problem before, but for some reason I never remember it until I’ve had to work through it for a long time.

The problem is that I was trying to put a simple graph on the page and I often like to work with really simple test data.  So I chose a column that had the same value all the way down which would give me a straight line on my graph.  The problem is that I had 3 spikes.  There shouldn’t be any spikes;  the data’s all the same.

I laid the data out into a table in the report and the data was certainly all the same.  Then I ran the report query in SSMS and verified again that the column only had a single value in it.  So where were the spikes coming from?  As it turns out, the problem was with both my query and the collection.  I was collecting disk data every 10mins from a server and putting it into a table.  What I wanted was to graph today’s data by time, and what I forgot to do was limit the query to just today.  Since it collects the data every day at more or less the same times, there were some times from yesterday that matched the times from today, so SSRS was adding the values together.  And once I limited it to just today, the spikes went away.
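If you’ve got a similar collection table, the fix is just a date filter.  Here’s a minimal sketch of the kind of thing I mean; the table and column names are made up, and the cast-to-date trick needs SQL 2008 or later:

select CollectionTime, FreeSpaceMB
from dbo.DiskSpaceStats
where CollectionTime >= cast(getdate() as date)  -- today only, so yesterday's matching times stay out of the graph
order by CollectionTime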

I consider myself lucky that I only spent an hour on this.  But it just goes to show that when data’s involved, debugging instantly gets harder because you tend to make assumptions one way or another.

Avoiding xp_cmdshell

This is gonna be a quick post, but I wanted to throw out a couple easy ways to avoid using xp_cmdshell.

Unfortunately this xp has gotten a bad rep, but only because it’s been abused.  If you use it properly there’s no reason why you can’t continue to get some good use out of it for many years to come.  Actually, I use it for my backup routines for a couple reasons… first, when I wrote them there really wasn’t a better choice.  And second, while I can architect a solution around it quite easily, it’s ridiculous to go so far out of my way to code around something that isn’t a huge problem.  The solution would be more complicated than the original problem, so why bother?  Oh yeah, and since I’m in a mixed shop there’s also that problem with having something like powershell on every box and I don’t wanna have to have a different routine for my SQL2K boxes, than I do for all the rest.  So using xp_cmdshell is still a good fit for me. 

Anyway though, there are some good reasons to avoid it when it’s not necessary (part of using it responsibly).  So we’re going to talk about 2 XPs today that will help you do just that.

First, there’s xp_fixeddrives.  This will report all the fixed drives on your server and the amount of free space in MB.  Now, this can’t replace a well-formed PS script that can give you a lot more info, but it’s a nice start when you just need to look at drive space real quick.
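And there’s nothing to calling it; it takes no parameters and just hands you back a row per fixed drive with the MB free:

exec master..xp_fixeddrives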

Second, we have xp_create_subdir.  This handy little xp will let you create a directory.  You just pass it the path you want to create and you’re golden.  So it would look something like this:

exec xp_create_subdir 'C:\Mydir\Anotherdir\theFinalDir'

And just like in DOS it’ll build the whole path for you so none of those dirs have to exist ahead of time.

Anyway, hope this finds someone well.

Keynote or Breakout Session?

From time to time you run across something that demonstrates perfectly exactly what the industry needs.  As I sit here at the PASS day-3 keynote I’m watching Dr. DeWitt talk about how the query optimizer finds query plans.  This is a very complicated topic under the covers and he’s covering some of the really complicated aspects.  He’s already explained the different types of histograms and how they work (I’m sure only at a high level), and he got into selectivity, etc.  And the thing is, he’s explaining this stuff in such simple terms and making it so easy to understand, it just hit me like a ton of bricks;  why isn’t this material being taught everywhere? 

This is one of the things I’m always talking about when I say there’s no really good training out there.  There are plenty of people out there teaching queries and tuning techniques, but nobody is bothering to break this stuff down so that people actually understand the terms and what they really mean.  Most of the time what they do is just define some terms briefly and then move on and expect everyone to be able to go back home and apply this stuff.  But without this background of how this stuff actually works and how the terms really fit together and how you get bad plans, etc, then people aren’t going to be nearly as successful as they could be.  Dr. DeWitt is clearly concerned with teaching people how to think instead of just what to think.

This is always the type of training I move towards because I find I never commit anything to memory unless I understand how it all works and fits together.  So what I’d like to see happen is for someone to put something like this together in a video series, or in a pre-con… something.  That’s a pre-con I’d actually pay for out of my own pocket.  A whole day of someone dedicated to making sure I understand how this stuff works… Hell Yeah!  But seriously, where does someone go to learn stuff like this?  Because this info isn’t out there in any human readable form.  And it’s apparently not just me either, cause he’s been brought back by overwhelming request, so I’d say the better part of the room is also screaming for this type of info.

My morning so far

Ok, aside from being kinda sick still, my morning has been filled with interesting issues.  Ahhh– the life of a prod DBA. 

It started today with space issues–again.  We’re just running out of space everywhere and it’s hard to even keep some DBs running because of the problems.  So I’m having to shrink some DBs just to keep them active. 

Now this one just gets me a little.  Had a vendor come up to me to ask for help.  He was on one of the prod servers and detached 3 of the DBs and couldn’t get them back.  Turns out he had moved the files and renamed them and expected SQL to know where they were.  He said he had problems initially and that’s why he did it, but he got stuck when he couldn’t attach them again and point SQL to the new files.  So I got that worked out for him with relatively little effort.

Now this next one is just interesting.  I just switched our backup routine on our big system to backup to 20 files.  So the dev downstairs had a routine to restore it to a different server (I didn’t know that) and his routine was bombing.  He had re-written it to use the 20 files, but it was still failing.  Now, I’ll say that doing it the way he did doesn’t make him dumb.  In fact, I could very easily see anyone making a similar mistake because to someone who doesn’t know backup intimately, it seems like the kind of thing you should be able to do.  What he did was build a dynamic string to hold the file name values.  So in the string he was saying something like this:

SET @BackupDatabaseFileNamePath = 'DISK = N' + '''' + '\\' + @ProdIP + '\' + LEFT(@BackupDatabaseFileNamePath,1) + '$' +
RIGHT(@BackupDatabaseFileNamePath,(LEN(@BackupDatabaseFileNamePath)-2)) + ''''

And he ended up with a string that looked like this:

DISK = '\\127.0.0.1\K$\SQLBackups\ServerName\DBName\01of20FullPRD20101102010001.BAK',

And he did that 20 times, once for each file.  So now his actual restore command looked like this: 

RESTORE DATABASE PRD
FROM @BackupDatabaseFileNamePathInc

And that looks like it should work because when you print it, you wind up with a perfect restore cmd.  The problem is that the restore cmd doesn’t work like that.  It takes params, or flags if you will, right?  And one of those flags is 'DISK ='.  That means that the flag itself is 'DISK =', not a string that contains that same text.  It’s a subtle difference to us, but not to the restore cmd.  So if you want to build a string like that for either a backup or a restore, then you have to build a string that contains the entire command and not just a single part that includes the params.

Here’s an example of something you can do though: 

DECLARE @file VARCHAR(100)
SET @file = 'c:\SSISPkgMgmt.bak'

RESTORE DATABASE SSISPkgMgmt
FROM DISK = @file  -- the DISK = flag stays outside the var; only the path goes in it
WITH replace

And what he was trying to do was this: 

DECLARE @file VARCHAR(100)
SET @file = 'DISK = ''c:\SSISPkgMgmt.bak'''

RESTORE DATABASE SSISPkgMgmt
FROM @file
WITH replace

If you run something like this you’ll see that SQL treats the whole string as the name of a backup device, because that’s what the restore syntax expects in that spot.
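And if you really do need to build the whole thing dynamically, the way out is to put the entire restore statement into a string and run it with EXEC().  Here’s a minimal sketch of the idea; the file names are made up and his real list had 20 of them:

DECLARE @sql NVARCHAR(MAX)
SET @sql = 'RESTORE DATABASE PRD FROM ' +
'DISK = ''\\127.0.0.1\K$\SQLBackups\PRD\01of02FullPRD.bak'', ' +
'DISK = ''\\127.0.0.1\K$\SQLBackups\PRD\02of02FullPRD.bak'' ' +
'WITH REPLACE'

PRINT @sql  -- sanity-check the cmd before you run it
EXEC (@sql)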

So anyway, big fun again today too.

The stupid have fallen

This is a followup to my previous post here.

Wow, what a day.  Ok, so I just got off the phone with the tech lead for that company.  It was enlightening to say the least.  It was really a longer call than I expected or that I even wanted, but my blood pressure’s gone down a bit now.  I obviously can’t recount the entire conversation for you, but here are the highlights:

V = Vendor

M = Me

V: Hi Sean, this is Josh (not his real name) with X and I was told you wanted to talk to me.

M: No, the last thing in the world I wanna do is talk to you, but I feel in order to get our issue resolved I have no choice.

V: Yeah, Cornhole (not his real name) told me you got upset with him when he was just trying to explain to you what your issue was and then you insisted on speaking with me.

M: This is just incredible.  No I didn’t get upset with him for explaining the issue to me.  I got upset with him when I told him his explanation was flat-out wrong and then he called me an idiot.  He didn’t even hint at it.  He called me an idiot outright.

V: Really?  Cause that’s not the story I got at all.  I heard that he was calmly explaining that your backups were the issue and you just exploded on him.

M: No, that’s NOT what happened at all.  That little turd actually called me stupid for not buying his asinine explanation for our issue and then I only exploded after he did that a couple times.

V:  Well what don’t you believe about it then?  Because we’ve seen that many times where backups will cause processes to block.

M: No you haven’t.  Listen, I realize that everyone can’t spend their days studying SQL, but one simple fact is that backups don’t block processes.  Period.  I’ve been doing nothing but DBs in SQL Server for over 15yrs against TBs of data and nowhere have I ever seen a backup block a process.  The problem is that you’ve got like 3 tables with TABLOCK in that sp and that’s what’s causing your blocking.  Don’t you think it’s funny that it only clears up AFTER you kill that process?

V: Where did you get the code?  Those sps are encrypted.

M: Encryption only stops amateurs.  And while we’re at it, what’s this script I hear about that you’ve got to unconfuse SQL to give you the real spid for the process?

V: Where did you hear that backups don’t block processes?  It’s well-known here that it’s one of our biggest problems.

M: No, your biggest problem is your TABLOCKS.  And I heard it from the truth. 

V: What is a tablock?

M: A table-level lock.  You’re locking the entire table just to read data.

V: I don’t know about that, I’ve never seen the code because it’s encrypted.

M: Well I do and that’s your problem.  Now what about this script I kept hearing about from Cornhole?

M: Did you get my email?

V: Yeah.

M: And you can see the place in the code where you’re locking the whole table?

V: Yeah.

M: So how is it that you lock the entire table and you claim that it’s the backup blocking everything?  Haven’t you noticed that the backup only decides to block stuff when this process is running?

V: That does sound a little suspicious when you put it that way.  What can we do?

M: Let me take those TABLOCKs out of there so I can query these tables without locking everything up.

V: I just saw your email signature and I’ve never heard of an MVP before, but I just looked it up.  Are you really one of those guys?

M: Yeah.

V: Wow, so I guess you really know what you’re doing then.

M: I’ve hung a few DBs in my day, sure.

V: Do you mind if I write you again if I have any questions?

M: I suppose that depends on whether you’ll let me change that stupid sp or not.

V: Yeah, go ahead.  I’ll have to have a talk with our devs tomorrow and try to convince them to put this in our next build.  Honestly, I’m only a tech lead here because I’ve been here a few months longer than anyone else on the support team.  We get all this stuff from the devs and they won’t tell us anything.

M: So what about this script I keep hearing about.  The one that tells you how to unconfuse SQL and give you the right spid for a process?

V: That’s a script the devs have and they won’t show it to me either.  I just tell my guys something else so they won’t know I haven’t seen it.

M: Wow, that sucks man.  You do know though that a script like that doesn’t exist, right?  It’s completely bullshit.

V: Yeah, I hate this place.  I’ve been suspecting for a while now that they were lying to me, but what can I do?  I need this job.  I’m just lucky to be working.  I have no idea what I’m doing here.

M: I don’t know what to say to that.  You’re right, you are lucky to be working these days.  But pick up a couple books on your own and learn SQL.  Don’t rely on them.  There are websites, blogs, etc.  I myself run a site.

So in short, I really started to feel for this guy.  He ended the call apologizing 100x for the grief they’ve caused me and he’s going to go out right away and start watching my vids and trying to learn more.  He asked if he could write me with questions, and I said, of course dude, anytime.  That’s what I do.

And yeah, he sounded about 22 as well.  Turns out I was right.  I asked and he was 21 and this was only his 2nd job and they brought him in from nothing and “trained” him.  I’d say he’s got some work to do.  But I personally consider this exchange a success.

Powershell to recycle error log

Here’s a really good one. I noticed that very few of my boxes had a job to recycle the sql error logs. This is important because unless specifically done, the error log will only recycle when the service is burped in some way. For those of you who aren’t up on the lingo, recycling the error log is when you close down one log and create a new one. The error logs are just txt files kept on the server. So recycling creates a new physical txt file. And the reason you care about this is because some servers don’t get bounced very often at all and the files can get quite large. I have one server with a 40GB error log. And when I try to open it, it sits there for like 2hrs and then finally dies cause there aren’t enough system resources to deal with a file that big. So you can see why you want to keep the size of the files down.

And like I said, the error log only recycles when you burp the service or do it manually. And the way you do it manually is by running this code:

EXEC master.dbo.sp_cycle_errorlog

But doing it manually isn’t good enough because it has to be done on a regular basis. I prefer to recycle my logs every day. It’s a good generic schedule and honestly I don’t ever remember needing anything more granular. OK, so a new log every day is what we’re after. That means we need to put this code in a job and have it run every day at the same time.

So ok, that set the stage for what we’re wanting to do here. Now we can get into what we actually wanna talk about… and that’s how we push our new job to all of our boxes. Now, I’m going to do this a specific way and I’m doing it like this for a reason. I’ll explain once I’m done.

Here are the steps I want you to take to prep for this venture.

1. Create a job to run every day (midnight is preferable) and include the above code in the job step.
2. Be sure to set the job owner to sa.
3. Script the job and save it as a file somewhere (there’s a sketch of what that script looks like right after this list).
4. You’ll have to script the job separately for the different versions of SQL. SQL2K’s job scheduling procs take different parameters than 2K5’s, so a script generated on a newer box blows up on it. So at least have one for SQL2K and below, and one for 2K5 and above.
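If you’d rather skip the GUI, here’s a minimal sketch of what the 2K5-and-up job script looks like. The job name and the midnight schedule are just my choices; swap in your own:

USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'DBA - Recycle Error Log', @owner_login_name = N'sa'
EXEC dbo.sp_add_jobstep @job_name = N'DBA - Recycle Error Log',
	@step_name = N'Cycle the log',
	@subsystem = N'TSQL',
	@command = N'EXEC master.dbo.sp_cycle_errorlog',
	@database_name = N'master'
EXEC dbo.sp_add_jobschedule @job_name = N'DBA - Recycle Error Log',
	@name = N'Daily at midnight',
	@freq_type = 4,        -- daily
	@freq_interval = 1,    -- every day
	@active_start_time = 0 -- 00:00:00
EXEC dbo.sp_add_jobserver @job_name = N'DBA - Recycle Error Log', @server_name = N'(local)'
GO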

 Now here’s the powershell script you’ll run.

$SourceSQLScript1 = "\\Server01\F$\SQLServerDBA\Scripts\Collector\SQLQueries\ServerListAllSQL2000.txt";

$SqlCmd1 = Invoke-Sqlcmd -ServerInstance $Server -Database $StatsDB -InputFile $SourceSQLScript1

$SqlCmd1 | % {
    [System.String]$ServerName = $_.ServerName;
    $ServerName;
    Invoke-Sqlcmd -ServerInstance $ServerName -Database "MSDB" -InputFile "\\Server01\F$\SQLServerDBA\Scripts\ServerConfig\RecycleSQLErrorLogJOB-SQL2K.sql" -SuppressProviderContextWarning
}

Now I’m going to explain what’s going on here and why:

1. $SourceSQLScript1 — This is the var that holds the location of the txt file where I store the query for the list of servers to run against. Inside, the file looks like this: select ServerID as InstanceID, ServerName from dbo.Servers (nolock) where IsSQL = 1 and SQLVersion = 2000. The reason I do it this way is because if I ever need to change the query, all of the scripts that call the query will automatically pick up the change. So putting the query in a file is a good thing.

2. $SqlCmd1 — here’s where I’m actually running the query in the step above and setting the results to this var. Now I’ve got an array of ServerNames I can cycle through to run my script on.

3. [System.String]$ServerName = $_.ServerName; — Here I’m just setting the current cursor iteration of the ServerName equal to a var ($ServerName). It just makes it easier to read, and if something comes up where I need to make a minor adjustment to how the ServerName looks, then I can mess with the var and not have to touch the rest of the script. Again, it reduces the effects on the script if I need to make changes to the value. It’s like later on down the line if I discover that I need to put [] around it or something like that. This could also be done in the query file, but it’s nice to have the option here as well should I need something special for a single script.

4. $ServerName; — Here I’m just printing out the ServerName so I can watch my script cycle through the boxes. It’s my way of knowing what’s going on. I hate that blinking cursor.

5. invoke-sqlcmd — This is where the real magic happens. I’m connecting to each box and running the job script I saved above.

Here are some finer points of the logic for why I did things the way I did:

1. I put the server list query into a var. This is to allow you (or me) to get the server list any way we like w/o disturbing the rest of the script. You could have a hardcoded list of serverNames, or have them in a file… whatever you like. Here’s a really good example.

Instead of this:
$SqlCmd1 = Invoke-Sqlcmd -ServerInstance $Server -Database $StatsDB -inputfile $SourceSQLScript1
You can have this:
$SqlCmd1 = "Server1", "Server2", "Server3", "Server4", "Server5"

 And the rest of the script stays the same because the cursor is run off of the var. It doesn’t care how that var gets populated.

2. I chose to run this separately for each version of SQL. So I ran it once for SQL2K, once for SQL2K5, etc. That’s because like I said there are differences in the job scripts for the different versions. I could have done it all in a single script, but I really don’t like coding that logic in there and it increases the size of the script because now I have to check for the version of SQL and make a decision before I can create the job. I have to point it to the right .sql script. That’s too much work when all I have to do is change the query file and the script file between runs. And this way, I can run them concurrently for each version simply by saving the script under different names and kicking them off at the same time.

 3. This one is my favorite because it’s what makes this so flexible. I chose to put the sql logic itself into a file and just call that file. The reason I did this is because not only is the sql logic too long for this type of script, but if I’ve got anything else I need to run against all my boxes, all I have to do is change the sql script I’m calling. I don’t have to make any changes at all to the rest of the script. So if I want to create or delete any specific users off of a group of boxes, all I have to do is put that code in a script and save it, and then point this script to it. Or anything else, right? This is really the bread and butter for powershell and multi-server mgmt.

So guys, take this boilerplate script and methodology and apply it to your own stuff. I’ve taken a lot of stuff out of the script for this blog because my scripts are part of a much larger solution I’ve developed and it’s not relevant to what you’ll be doing. But the script works as-is and you can add whatever you want to it. And now you’ve got a generic way to push T-SQL out to your boxes, and a generic way to populate the server list it goes against. What else do you need really?