Category Archives: Time-savers

Why is adopting Powershell so hard?

One of the biggest battles I fight when I teach PS or show it to different companies is adoption.  DBAs and managers just don’t get why they should be using PS to manage their environments.  And the reason so many of them are slow to adopt is because the right message is very hard to come by.  In fact, I see the wrong message all the time.  This was driven home again just now as I received my “SQL Server Pro UPDATE” email (read it here).  The editorial was about why DBAs should care about PS.  It was written by Michael Otey (pronounced Otee) and I’m sorry to say that as the Sr. Technical Director he’s completely missed the mark.  He’s managed to relegate the most important management tool DBAs have to a mere sideline player that allows you to combine SQL work with OS-level work.  Dude, you’ve completely missed the boat!!!

I've said this a number of times and I'll say it again here.  Let's get the right message out about PS.  Let's start telling everyone the truth about it, and that truth is that you're going to get left behind if you don't know PS, and you're not going to be able to provide true value to your company.

I’m a working production DBA in a very large shop.  And I can tell you with absolute authority that PS is more than just a way to give you some OS-level functionality in your SQL processes.  PS is vital to the management of my shop.  The primary role of PS is to allow you to manage hundreds of objects at once with just a few lines of code.  As you’ve seen in my videos again and again, and in my classes again and again, you just can’t manage any decently sized environment by hand.  It’s just too much and it’s accident-prone.

Now, a large environment doesn’t have to mean hundreds of servers.  Oh no, it can mean dozens of DBs on a single server.  It can mean dozens of schemas in a single DB, or dozens of tables, SPs, views, etc.  Or it can even mean dozens of jobs or logins.  It doesn’t matter what you’re cycling through.  The point is that PS is much more efficient at managing these objects than T-SQL will ever be.  There may be a built-in SP for running a T-SQL statement against all the DBs on your server, and there’s even one for running something on all the tables I believe.  But there’s nothing for all the jobs, or SPs, or views, or logins, or credentials, or any of the other many objects you can manage natively with PS.  With just a few characters of code you can manage all the tables in your DB.  And with just a few more characters you can do that same operation against all the DBs on your server.  And with yet just a few more characters of code, you can do that same operation against all the DBs across as many servers as you like.  THAT’S THE POWER OF POWERSHELL.  It’s more than just a way to manage files while you’re doing some T-SQL operations.
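Just to make that concrete, here's a tiny sketch of that progression using the SQLPS provider. The instance name, the servers.txt list, and the DB name are all placeholders, and I'm just scripting the tables out because it's a harmless operation to demo with; swap in whatever you actually need to do.

# every table in one DB
dir SQLSERVER:\SQL\localhost\DEFAULT\Databases\MyDB\Tables | %{$_.Script()}

# every table in every DB on the server: just add a loop
dir SQLSERVER:\SQL\localhost\DEFAULT\Databases | %{dir "SQLSERVER:\SQL\localhost\DEFAULT\Databases\$($_.Name)\Tables"} | %{$_.Script()}

# every DB on every server in a list: one more loop
gc .\servers.txt | %{$s = $_; dir "SQLSERVER:\SQL\$s\DEFAULT\Databases" | %{dir "SQLSERVER:\SQL\$s\DEFAULT\Databases\$($_.Name)\Tables"}} | %{$_.Script()}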

A perfect example is adding user accounts to DBs on several boxes.  We quite often get contractors who come in and do specific work for us, and they need to be given rights to all of the environments for a given application.  One application we support has 500+ DBs on each server.  There are 7 prod servers, 3 QA servers, and 5 dev boxes, and all of them have 500+ DBs on them.  So that's 15 servers and at least 7,500 DBs that I have to add these users to.  And using Windows groups will only take you so far, because when the next contractor comes in he may need different rights, so you still find yourself modifying the perms regardless.  The point is I go through this exercise about once every 2mos and PS has made it a trivial matter.  And that's just ONE example.
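Here's roughly what that looks like in SMO. This is just a sketch, not my production script: the servers.txt list, the CONTOSO\JSmith account, and the db_datareader role are placeholders for whatever the contractor actually needs.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo") | out-null

$login = "CONTOSO\JSmith"    # placeholder contractor account

gc .\servers.txt | %{
    $srv = new-object microsoft.sqlserver.management.smo.server $_

    # create the server-level login if it's not already there
    if (-not $srv.Logins.Contains($login)) {
        $l = new-object microsoft.sqlserver.management.smo.login($srv, $login)
        $l.LoginType = "WindowsUser"
        $l.Create()
    }

    # map a user into every user DB and add it to the role
    $srv.Databases | ?{-not $_.IsSystemObject} | %{
        if (-not $_.Users.Contains($login)) {
            $u = new-object microsoft.sqlserver.management.smo.user($_, $login)
            $u.Login = $login
            $u.Create()
        }
        $_.Roles["db_datareader"].AddMember($login)
    }
}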

The time has come to stop downplaying the role of Powershell in SQL Server.  DBAs need to be running to it.  We’re being asked to do more and more with less time and fewer resources.  And there are too many courses being taught, too many SQLSaturday sessions, too many videos (FREE on MidnightDBA.com) for you to remain in the dark any longer.  I’ve managed 900+ servers by myself using mainly PS.  I’d say it goes quite a bit deeper than being a helper to keep you from using xp_cmdshell.

Hey, how about some new Powershell vids?

I’ve been a busy little guy this week.  I’ve posted 4 new videos.

The first one is on cycling the SQL error log from Powershell.  Well, not really; you're actually deploying the cycling solution to other boxes using Powershell.
You can see it here:
http://midnightdba.itbookworm.com/VidPages/PowershellCycleErrorLog/PowershellCycleErrorLog.aspx

The next one is about reading the SQL error log from Powershell.

You can see it here:  http://midnightdba.itbookworm.com/VidPages/PowershellReadErrorLogs/PowershellReadErrorLogs.aspx

The next 2 are a short series on changing DB permissions in Powershell.
You can see them here:

http://midnightdba.itbookworm.com/VidPages/PowershellChangeSQLPermissions/PowershellChangeSQLPermissions.aspx
http://midnightdba.itbookworm.com/VidPages/PowershellChangeSQLPermissions2/PowershellChangeSQLPermissions2.aspx

Red-Gate is Badass!!

Ok, so the other night I uninstalled my R2 install on my box and put on the new SQL2012.  I know an in-place upgrade exists, but I just wanted the thing on there and didn't feel like messing with anything stupid going wrong.  So anyway, I was halfway through my uninstall when I remembered: oh crap, I'm gonna lose SQLPrompt because SQL2012 just came out.  I was pretty upset because I use that tool all the time.

So while Denali was installing, I went out to Red-Gate's site to see if they had listed anywhere when SQLPrompt would support Denali… and you know what?  It already does.  I installed the newest version and it works beautifully in SQL2012.  And that's not all.  They've got this kewl new thing they're doing where they let you see beta features.  They call it Experimental Features and it's at the bottom of the SQL Prompt 5 menu.  Here's what it looks like.

These are beta features and you’re welcome to install them and try them out as you wish.  Personally I love this because I quite often want stuff like this and even if there are a couple bugs it’s still better than not having it at all.

So Red-Gate, I salute you for having this ready in time for the release and for doing such a good job with presenting us with these new features.  I want to have your babies.

Changing Job Step Properties in Powershell

I read a blog today from my good friend @SirSQL (Nic Cain) where he was talking about how to change retry attempts on all the jobs on a server. Well, technically it's the job steps that have the retry attempts, not the jobs themselves. And whenever I see something like this, my fabulously huge MCM brain tries to turn it into Powershell. So I put my fingers to the keys and like 10mins later I had a workable script. And while Nic's solution is perfectly good T-SQL, it's another one of those things that highlights the power of the shell, because I only need a few lines of code to do it.

The easiest way to do this is through SSMS. Just start PS from the Jobs node.

And that’ll give you a blank SQLPS window that’s already in the jobs node.

Since we're in SQLPS we don't need to load the assembly, but I'll do it anyway to show you how, because you might not be in SQLPS when you do it. It won't hurt anything to load it again. That's one of the things SQLPS does for you; it loads these assemblies.

Here’s how you load the assembly and set a variable to the JobStep object type.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

Now that we have our new $js var (it stands for JobStep), we can fill it with the steps of all the jobs on the server. First, though, let's take a look at the members of our new var.

$js | gm

Now you’ll get a good listing of all the members so you’ll know what you can and can’t do with it.

Towards the bottom you’ll see the property we’re interested in here: RetryAttempts. Now we just have to go through all the steps and change the retries to what we want. Here I’ll change it to 5 just because it’s a nice round number. You’ll be surprised how easy it is to do this. I’ll go ahead and tack it onto the partial script above and this will become our entire script.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.enumjobstepsbyid()}
$js | %{$_.RetryAttempts = 5;$_.alter()}

Ok, that's all we need to do to change the RetryAttempts property for all the jobs on the server. But we're not done talking about this… not even by a long shot.

First, in line 4 notice I call the EnumJobStepsByID() method on each item of the dir. This is how I populate the $js var with all of the job steps. What this line says is list all of the jobs (using dir) and then for each one, get a list of its steps and put it in $js.

Line 5 runs through each of the job steps in $js and actually performs the work of setting RetryAttempts to our new value. And remember, job steps have an Alter() method, and typically whenever something in SMO has an Alter() method, it wants you to use it. If you don't, the changes will take effect in your PS session only and won't be pushed to the server. So call the Alter() method.

Now, I know what you’re saying… how would you do it for only some of the jobs? Because so far we’ve assumed that you want to apply the changes to every job on the server. Well, there are 2 ways to do that and they’re incredibly easy.

First, you can limit the data from the dir cmd in line 4. It could look like this:

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | ?{$_.Name -match "maint"} | %{$_.enumjobstepsbyid()}
$js | %{$_.RetryAttempts = 5;$_.alter()}

Take note of the new line 4. I’ve just added a where clause to the pipeline so now only jobs with the word “maint” in their names will be in the list.
The 2nd way is just as easy, but you do it at the job step level. If you remember from above when we looked at the members of $js, there was a property called 'Parent'. That's the step's parent job, so you can match on the job name there instead. All you have to do is move the where clause to $js and you'll achieve the same thing.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.EnumJobStepsById()}
$js | ?{$_.Parent -match "maint"} | %{$_.RetryAttempts = 5;$_.alter()}

Now the new line 5 reflects our easy change.

There are so many uses for this code it's incredible. There are plenty of properties to change and so many ways to limit the result set. Let's say you have a maint routine on all of your servers and you want to alter the command that the 1st job step runs. That's very easy. Instead of changing the RetryAttempts property, filter down to step 1 and change its Command property like this:

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.enumjobstepsbyid()}
$js | ?{$_.Parent -match "maint" -and $_.ID -eq 1} | %{$_.Command = "My new code.";$_.alter()}

There's nothing here Nic couldn't change his script to do pretty easily, but this is much shorter and easier to read, I think. It also has one huge advantage… I can run it on multiple servers without much effort. I'm not going to go down that road in depth here because I've posted many solutions with that code in them already, so it wouldn't be hard for you to add it to this script, but the sketch below gives you the idea.
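It's a sketch, assuming a servers.txt list of instance names, and I'm going through the JobSteps collection since we're connecting with SMO directly instead of sitting in the provider:

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo") | out-null

gc .\servers.txt | %{
    $srv = new-object microsoft.sqlserver.management.smo.server $_
    # same op as above, but only for jobs with "maint" in the name, on every server in the list
    $srv.JobServer.Jobs | ?{$_.Name -match "maint"} | %{$_.JobSteps} | %{$_.RetryAttempts = 5; $_.Alter()}
}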

So anyway, thanks Nic for letting me use you as the base for this post. Your solution is fine, I just prefer the look and feel of 4 lines of code.

I’ve also got a companion video for this post:

http://midnightdba.itbookworm.com/VidPages/PowershellChangeJobStepProperties/PowershellChangeJobStepProperties.aspx

Mystery of the missing files

About 3mos ago I had my work box rebuilt.  The encryption software glitched and it just wouldn’t boot.  So the helpdesk girl was able to slave the drive to another box and copy my files to a NAS.  I looked and all the folders were where they should be.  Great.

She then rebuilt the box and when I got it back I copied all the folders back.  I let it rest and I was very pleased that I was able to preserve all my scripts, etc.  A few days passed and I needed a script, so I went to that folder and it was empty.  WTx dude?  The folders were still out on the NAS, so I tried to pull the files off of there, but they were missing there too.  OK, now I'm upset.  I've just lost most everything I cared about on my box.  It took me a while to accept it but I finally did.

Well, today (this is now several wks later) I finally went to replace one of the things I lost by downloading it again.  When I went to save it though, it said that it already existed and asked if I wanted to replace it.  See, I saved it to the same folder as before because I'm OCD that way.  Needless to say it took me about 2secs to figure out what had happened.  Normally I set up my box the way I want right away, but this time I got busy and just forgot.  As it turns out my files were there, just hidden, and I didn't have 'show hidden files' checked in folder options.  Now that I knew the problem, how do I fix it?  There are a lot of files and folders to go through.  The answer, of course, is Powershell.  I got it right on the first try, and the code is so small, so compact, and so easy to come up with that it just screams PS.  It's the perfect example of what PS is all about.  So here's the command I used… enjoy.

dir -recurse -force | %{$_.attributes = 'Archive'}

Now, here’s a brief explanation of a couple of the flags.

-recurse — means to run it against all the subfolders too.
-force — this is a good one because it tells PS to show you all the files.  If they’re hidden you won’t see them unless you use this flag.
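And if you want to see what -force is going to surface before you go flipping attributes, you can list just the hidden items first.  Same idea, just a read-only version:

# show only the items that are currently hidden
dir -recurse -force | ?{$_.Attributes -match "Hidden"}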

So that’s it really.  A single line of code to unhide hundreds of files in dozens of folders and subfolders.  And since this is my first day back after vacation, I’d say I’m starting it off on a decent note.

Comparing DB objects in Powershell

I got a request today to compare 2 DBs and see if they've got all the same tables. It seemed easy enough since I've got the Red-Gate schema compare tool, but it actually wasn't. See, for some reason the schema compare doesn't work very well on my box, and it's big overkill for what I needed. All I really needed was to see if the same tables existed. I wasn't interested in comparing their actual definitions. So I thought I'd give Powershell a try.

I started by picking my cmdlet, which was going to be Compare-Object. I use its alias ‘Diff’. Now, this cmdlet isn’t typically used for comparing DB objects, but there’s nothing that says it can’t be. After giving it a little thought I decided to write the following code to get my comparison. Oh, and I’m just using sample DBs for my examples.

$a = dir sqlserver:\sql\localhost\default\databases\AdventureWorks\tables
$b = dir sqlserver:\sql\localhost\default\databases\AdventureWorksLT\tables

Diff -reference ($a) -difference ($b) -property name

OK, that’s the base-level syntax you need to do a comparison on table names. That code will give you the following results.

The left arrows (<=) in the SideIndicator column show tables that are only found in the reference object ($a), while the right arrows (=>) show objects that are only in the difference object ($b). And that's really it. The only thing you need to know is that while the differences are listed, objects that appear in both sets aren't. So your best bet would be to have an empty result set, because that would mean that all of the objects exist in both places.

Now, keep in mind also that I only matched on the name property. So there are opportunities for this to go wrong because I’m not taking schema into account. If I wanted to take schema into account though, I would just add it to the property list like this:

$a = dir sqlserver:\sql\localhost\default\databases\AdventureWorks\tables
$b = dir sqlserver:\sql\localhost\default\databases\AdventureWorksLT\tables

Diff -reference ($a) -difference ($b) -property schema, name

This time you notice that the Address table shows up twice in different schemas. And in the first result set the Address table wasn’t there, so that means that the Address table was considered a match in both DBs because we weren’t taking schema into account.

However, there's one final piece that you may find useful. What if you want to see all the objects, even the ones that match? You can throw in the -IncludeEqual flag. I'll run the original Diff to show you that now the Address table shows back up and is indeed considered equal when you compare it just by name.

$a = dir sqlserver:\sql\localhost\default\databases\AdventureWorks\tables
$b = dir sqlserver:\sql\localhost\default\databases\AdventureWorksLT\tables

Diff -reference ($a) -difference ($b) -property name -IncludeEqual

Now that we’ve got all the plumbing worked out you can apply this to SPs, functions, or whatever you like.
It won’t give you an actual schema compare against these objects, but it’s really helpful in quite a few instances to be able to just see which objects exist.
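For example, here's the same trick pointed at SPs instead of tables. Same sample DBs, same Diff, just a different node in the provider:

$a = dir sqlserver:\sql\localhost\default\databases\AdventureWorks\StoredProcedures
$b = dir sqlserver:\sql\localhost\default\databases\AdventureWorksLT\StoredProcedures

Diff -reference ($a) -difference ($b) -property schema, name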

That’s it for this time.

Change your process

This is an excellent example of how you need to be flexible with your processes, even when you’re in the middle of a project.

We started a project to move a DB to a new set of disks. Since the files are large, we probably weren't going to be able to fit it into a single downtime, so we were going to just move one file at a time over the next few weeks until they were all done. Well, due to circumstances out of our control, now they all have to be done at the same time. The problem is that now the file copies are going to take in excess of 6hrs, which is way longer than any downtime they would give us. I know, right? Don't worry, I've got big problems with them forcing us to do operations in a large chunk like this, and then saying we can't have time to do it. So we were doing our test last week and it did indeed take about 6hrs to copy all the files. And I don't know why it didn't hit me before, but why not change the process? The copy process was there because we were going to piecemeal the operation over several weeks, but since that's gone, maybe it's time to come up with a new strategy.

So that's what I did. My new strategy is an even simpler backup/restore op. All I have to do when I restore is map the files to their new locations and I'm golden. So far it's 6hrs to copy, and I know I can back up in 20-25mins. So my restore, I'm guessing, will be about 30mins (give or take).
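The restore side of it is just a plain RESTORE with a MOVE for each file so everything lands on the new disks. Here's a rough sketch through Invoke-Sqlcmd; the DB name, logical file names, and paths are all made up, so pull your real logical names from sys.master_files (or RESTORE FILELISTONLY) first.

$restore = @"
RESTORE DATABASE MyBigDB
FROM DISK = N'H:\Backups\MyBigDB.bak'
WITH MOVE N'MyBigDB_Data' TO N'E:\SQLData\MyBigDB_Data.mdf',
     MOVE N'MyBigDB_Log'  TO N'F:\SQLLogs\MyBigDB_Log.ldf',
     REPLACE, STATS = 5;
"@

# bump -QueryTimeout if your session enforces one; a restore this size will take a while
Invoke-Sqlcmd -ServerInstance "MyServer" -Query $restore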

Of course, the backup/restore won't perform that well on its own. You have to tune it so it uses the box's resources to your advantage. This is where knowing how to tune your backups comes into play. And while I often say that tuning backups is frustrating because you can't use a lot of your resources, since you've still gotta leave room for user processing on the box, this is one of those times you can crank it all the way up. Everyone's going to be offline for this op, so the box is completely mine. I can use every ounce of RAM and CPU on the server if I want. And that's what I'm going to do. If you're interested in how to go about tuning your backups, you can look at my recent SQLSaturday session on the topic. I did it for SQLSaturday #90 in OKC. You can find the vid page for it here: http://midnightdba.itbookworm.com/Events.aspx
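To give you an idea of the knobs I'm talking about, this is the general shape of a cranked-up backup: stripe it across several files and feed it more buffers and bigger transfers. The paths and numbers here are placeholders, so test them on your own hardware instead of trusting mine.

$backup = @"
BACKUP DATABASE MyBigDB
TO DISK = N'H:\Backups\MyBigDB_1.bak',
   DISK = N'I:\Backups\MyBigDB_2.bak',
   DISK = N'J:\Backups\MyBigDB_3.bak'
WITH COMPRESSION, BUFFERCOUNT = 64, MAXTRANSFERSIZE = 4194304, STATS = 5;
"@

Invoke-Sqlcmd -ServerInstance "MyServer" -Query $backup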

So anyway, the point is that just because you've come up with a way to do something, don't set it in stone. If the scope changes in such a way that you can now do it a better way, then don't be afraid to say "Stop, I've got a better way". You may get some pushback from your peers because the project plan is already done and they just want to move forward with what you've got, but that's when you need to push back and say no, this process was developed for a different circumstance and now it's a different scenario completely, so it's no longer a viable method.

Good practice saves the day

Ok, a funny thing happened at work today. I had to write a script to move a DB’s files to another drive. Sounds easy enough, huh? Well in theory it was easy, but leave it to me to mess it up. Now, this was just a dry-run so I don’t feel too bad about it, and it gives me a chance to test my process. So, here’s what happened.

My process was this:

1. Run the move statements (ALTER DATABASE ... MODIFY FILE) for all the files.
2. Take DB offline.
3. Run PS script that moves the files.

The PS script selects from sys.master_files to get a list of the files to move. Here’s the problem… I’ve already run the move statements so SQL thinks they’re already in the new location. Therefore, the PS script can’t move them from their current location. However, good practice saves the day.

As a habit I always save a copy of sys.master_files to another table in case something happens. Now it’s a simple matter of changing my PS script from selecting from sys.master_files to selecting from sys.master_filesSEAN. Now I’m back in business. That’s the thing I find about good practices; they help you in ways you can’t even predict most of the time.
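If you want to pick up the habit, the snapshot itself is tiny. You run it before you start issuing the move statements; the table name here is just a placeholder, so use whatever naming scheme you like.

$snap = @"
IF OBJECT_ID('dbo.master_files_snapshot') IS NOT NULL DROP TABLE dbo.master_files_snapshot;
SELECT * INTO dbo.master_files_snapshot FROM sys.master_files;
"@

Invoke-Sqlcmd -ServerInstance "MyServer" -Database "master" -Query $snap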

Here are some other general steps I take to protect against one thing or another:
1. I always try to group like objects in the same schema. It’s just a good idea.
2. I always copy the system DBs when upgrading or patching SQL. Sure I can restore from backup, and I have those too, but if something big happens and I have to completely re-install from scratch it’s much easier to just drop the old system DB files out there than it is to restore from backup.
3. Whenever I’m done working on an important prod system and it’s time to move on to something else, I always close all prod windows and even disconnect from it in object explorer. I don’t want any chance there’s a way to open a new connection to it and do something dumb.

I’d love to give you guys a huge list, and I know there are more, I just can’t think of any right this second.

An interesting log truncation case

Here's a good one for you guys. I got a call from one of my DBAs today saying they're having trouble with the logs on one server not truncating. The logs have brought the server's disk space down to a critical level, and it needs to be fixed. The problem is that the DBA had looked at the log backups and they seemed fine. The log backup job was running just fine, and there were no active transactions.

I asked if there were any DBs being replicated… no.
OK, I’ll look into it.

So I connect and start my investigation where I always do for log truncation problems.
SELECT name, log_reuse_wait_desc from sys.databases

This tells me exactly what the log is waiting for before it can truncate. It's really a matter of what your goal is. Some people love the troubleshooting process so much that they want to drag it out as long as they can. So they hunt around looking at different things to try to find what the problem is. They gather evidence and try different things until the problem is solved. I'm not that way though. I want to find the issue ASAP and get on to other things. So if you really enjoy the hunt then keep doing what you're doing. But if you want to solve the problem quickly, then use the above query and get on with your life.
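And since I'm never looking at just one server, I usually wrap that query in a quick loop so I can see the whole environment at once. A sketch, assuming a servers.txt list of instance names and that Invoke-Sqlcmd is available:

gc .\servers.txt | %{
    Invoke-Sqlcmd -ServerInstance $_ -Query "SELECT @@SERVERNAME AS Server, name, log_reuse_wait_desc FROM sys.databases WHERE log_reuse_wait_desc <> 'NOTHING'"
}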

So anyway, the log was waiting on a log backup. I then checked the log backup job and it had been running just fine all day. So ok, the log is being backed up, but SQL still thinks that it’s waiting on a log backup before it can truncate the log.

This is where a knowledge of SQL comes in. At this point there are only 2 things that could cause this situation, because all the DBs on the server are in the same boat with their logs filling up: either the log backup isn't actually running, or it's running with the copy_only flag.
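Both of those are easy enough to check straight out of msdb, by the way. Something like this (just a sketch) shows the last log backup per DB and whether any of them were copy_only, which answers both questions in one shot:

$q = @"
SELECT database_name,
       MAX(backup_finish_date)        AS last_log_backup,
       MAX(CAST(is_copy_only AS int)) AS any_copy_only
FROM   msdb.dbo.backupset
WHERE  type = 'L'
GROUP BY database_name
ORDER BY database_name;
"@

Invoke-Sqlcmd -ServerInstance "MyServer" -Query $q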

A quick look at the job properties told me exactly what I needed to know. Looking at the execution time of the different runs, the job finishes in about 10secs. That seems a little fast for me on a server that has like 200 DBs on it. So looking back at the history 2 days ago the job was taking 9mins.

At this point, I knew exactly what the problem was. I looked in the SP that runs the backup, and the line that actually runs the backup had been commented out. Someone was trying to make a simple change and commented out the wrong line in the SP.

Look, I’m not smarter than your average DBA, and I’m not necessarily better educated. What I do however, is follow a predictable pattern of troubleshooting, and actually pay attention to the evidence. Sometimes the evidence isn’t clear and you have to make guesses, but most of the time it is.
I understand that the thrill of the hunt keeps you going, but some things are easy enough that you should just get them done. Why a log won't truncate is such an easy thing to diagnose that it should just be commonplace for you. Get your troubleshooting fix somewhere else.

Partitioning and the free lunch

Quite often there’s no benefit to a feature if you’re not using it, but this time that’s not the case. Even if you’re not using partitioning you can still take advantage of it in a really cool way.

Let’s say you’ve got a table that you load every night, and you want to move the current data to a new table and truncate the current one. Here’s how you’d do something like that today.

1. Select * into T2 from T1
2. Insert into T2 select * from T1
3. Rename T1 to T2. Then recreate T1.

But let's face it, none of those choices are really ideal, as they all take time. Of those I would choose 1, 3, 2, in that order. I don't like the rename option because the recreated table gets a new object ID, and it's hard to track objects over time if they keep changing IDs. So those are your choices if you're not taking advantage of partitioning.

If you ARE taking advantage of partitioning however, you’ve got a different choice. Now you can switch the data out of the current table and into the 2nd w/o any data actually moving at all. And the best part is that you don’t even have to have partitioned your tables at all. That’s right… your tables are automatically partitioned as a single-partition table so you’re using partitioning whether you know it or not. So that means that in order to truncate T1 and move its data to T2 you only have to do this:

alter table T1
switch partition 1 to T2

Now there are some caveats, but as with most things it’s not too bad. The biggest 3 are:
1. The table structures have to be identical.
2. They have to be on the same filegroup. So the partition you’re switching out and the one you’re switching it into have to be on the same filegroup.
3. The target table/partition has to be empty.

There are some other recommendations, but those 3 are the only showstoppers. And once you've got the data into the new table, you can then move it to a new filegroup, or do whatever you like with it.
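Here's a bare-bones sketch of the swap itself, with made-up table names, wrapped in Invoke-Sqlcmd since I'd usually script it. I'm using the whole-table form of SWITCH here; either way the point is the same, no rows actually move.

$swap = @"
-- dbo.DailyLoad_Archive must be identical in structure to dbo.DailyLoad,
-- sit on the same filegroup, and be empty before the switch
ALTER TABLE dbo.DailyLoad SWITCH TO dbo.DailyLoad_Archive;   -- metadata-only, effectively instant
"@

Invoke-Sqlcmd -ServerInstance "MyServer" -Database "MyDB" -Query $swap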

So anyway, it’s a wicked cool method for getting rid of your current data even though you’re not officially partitioning your table.