Tag Archives: Powershell

Hey, how about some new Powershell vids?

I’ve been a busy little guy this week.  I’ve posted 4 new videos.

The first one is on cycling the SQL error log from Powershell.  Well, not exactly; you're really deploying the solution to other boxes using Powershell.
You can see it here:
http://midnightdba.itbookworm.com/VidPages/PowershellCycleErrorLog/PowershellCycleErrorLog.aspx

The next one is about reading the SQL error log from Powershell.

You can see it here:  http://midnightdba.itbookworm.com/VidPages/PowershellReadErrorLogs/PowershellReadErrorLogs.aspx

The next 2 are a short series on changing DB permissions in Powershell.
You can see them here:

http://midnightdba.itbookworm.com/VidPages/PowershellChangeSQLPermissions/PowershellChangeSQLPermissions.aspx
http://midnightdba.itbookworm.com/VidPages/PowershellChangeSQLPermissions2/PowershellChangeSQLPermissions2.aspx

Changing Job Step Properties in Powershell

I read a blog today from my good friend @SirSQL (Nic Cain) where he was talking about how to change retry attempts on all the jobs on a server. Well, technically it’s the job steps that have the retry attempts, not the jobs themselves. And whenever I see something like this, my fabulously huge MCM brain tries to turn it into Powershell. So I put my fingers to the keys and like 10mins later I had a workable script. And while Nic’s solution is perfectly good T-SQL, it’s another one of those things that highlights the power of the shell because I only need 3 lines of code to do it.

The easiest way to do this is through SSMS. Just start PS from the Jobs node.

And that’ll give you a blank SQLPS window that’s already in the jobs node.
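If you'd rather start from a plain PowerShell session instead of SSMS, you can get to the same node yourself. This is just a minimal sketch; it assumes the SQL provider is available (on newer boxes that's the SQLPS module, on older ones you'd launch the sqlps mini-shell instead) and a default instance, so adjust the path for your server:

Import-Module SQLPS -DisableNameChecking
cd SQLSERVER:\SQL\localhost\DEFAULT\JobServer\Jobs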

Since we're in SQLPS we don't need to load the assembly, but I'll do it anyway to show you how, because you might not be in SQLPS when you do this. It won't hurt anything to load it again. But that's one of the things SQLPS does for you; it loads these assemblies.

Here’s how you load the assembly and set a variable to the JobStep object type.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

Now that we have our new $js var (it stands for JobStep), we can fill it with the steps of all the jobs on the server. First though, let's take a look at the members of our new var.

$js | gm

Now you’ll get a good listing of all the members so you’ll know what you can and can’t do with it.

Towards the bottom you’ll see the property we’re interested in here: RetryAttempts. Now we just have to go through all the steps and change the retries to what we want. Here I’ll change it to 5 just because it’s a nice round number. You’ll be surprised how easy it is to do this. I’ll go ahead and tack it onto the partial script above and this will become our entire script.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.enumjobstepsbyid()}
$js | %{$_.RetryAttempts = 5;$_.alter()}

Ok, that’s all we need to do to change the RetryAttempts property for all the jobs on the server. But we’re not done talking about this… not even by a longshot.

First, in line 4 notice I call the EnumJobStepsByID() method on each item of the dir. This is how I populate the $js var with all of the job steps. What this line says is list all of the jobs (using dir) and then for each one, get a list of its steps and put it in $js.

Line 5 runs through each of the job steps in $js and actually performs the work of setting the RetryAttempts to our new value. And remember, job steps have an Alter() method, and typically whenever something in PS has an alter method it likes you to use it. If you don't, the changes will take effect in your PS session only and will not be pushed to the server. So call the Alter() method.

Now, I know what you’re saying… how would you do it for only some of the jobs? Because so far we’ve assumed that you want to apply the changes to every job on the server. Well, there are 2 ways to do that and they’re incredibly easy.

First, you can limit the data from the dir cmd in line 4. It could look like this:

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | ?{$_.Name -match "maint"} | %{$_.enumjobstepsbyid()}
$js | %{$_.RetryAttempts = 5;$_.alter()}

Take note of the new line 4. I’ve just added a where clause to the pipeline so now only jobs with the word “maint” in their names will be in the list.
The 2nd way is just as easy, but you do it at the job step level. If you remember from above when we looked at the members of $js, there was a property called "Parent". This is the parent job name for the step. So all you have to do is add the where clause to the $js pipeline instead and you'll achieve the same thing.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.EnumJobStepsById()}
$js | ?{$_.Parent -match "maint"} | %{$_.RetryAttempts = 5;$_.alter()}

Now the new line 5 reflects our easy change.

There are so many uses for this code it’s incredible. There are plenty of properties to change and so many ways to limit the result set. Let’s say you have a maint routine on all of your servers and you want to alter the command that the 1st job step runs. That’s very easy. Instead of changing the RetryAttempts property, just change it to the Command property like this:

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.enumjobstepsbyid()}
$js | ?{$_.Parent -match "maint"} | %{$_.Command = "My new code.";$_.alter()}

There's nothing here Nic couldn't change his script to do pretty easily, but this is much shorter and easier to read, I think. It also has one huge advantage… I can run it on multiple servers without much effort. I'm not going to go down that road in this blog because I've posted many solutions with that code in it already, so it wouldn't take much for you to add that to this script.
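Just to give you an idea of the shape of it though, here's a minimal sketch of the same change across a list of servers. The server names are placeholders, it assumes default instances, and it assumes you're in SQLPS (or have the provider and SMO loaded) so the SQLSERVER: drive is available:

$Servers = "Server1", "Server2", "Server3"
$Servers | %{
$ServerName = $_;
$js = dir SQLSERVER:\SQL\$ServerName\DEFAULT\JobServer\Jobs | ?{$_.Name -match "maint"} | %{$_.EnumJobStepsById()}
$js | %{$_.RetryAttempts = 5; $_.Alter()}
}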

So anyway, thanks Nic for letting me use you as the base for this post. Your solution is fine, I just prefer the look and feel of 4 lines of code.

I’ve also got a companion video for this post:

http://midnightdba.itbookworm.com/VidPages/PowershellChangeJobStepProperties/PowershellChangeJobStepProperties.aspx

Mystery of the missing files

About 3mos ago I had my work box rebuilt.  The encryption software glitched and it just wouldn’t boot.  So the helpdesk girl was able to slave the drive to another box and copy my files to a NAS.  I looked and all the folders were where they should be.  Great.

She then rebuilt the box and when I got it back I copied all the folders back.  I let it rest and I was very pleased that I was able to preserve all my scripts, etc.  A few days passed and I needed a script, so I went to that folder and it was empty.  WTx dude?  I looked and all the folders were still out on the NAS, so I tried to pull the files off of there, but they were missing there too.  OK, now I'm upset.  I've just lost most everything I cared about on my box.  It took me a while to accept it, but I finally did.

Well, today (this is now several wks later) I finally went to replace one of the things I lost by downloading it again.  When I went to save it though, it said the file already existed and asked if I wanted to replace it.  See, I saved it to the same folder as before because I'm OCD that way.  Needless to say it took me about 2secs to figure out what had happened.  Normally I set up my box the way I want right away, but this time I got busy and just forgot.  As it turns out my files were there the whole time, just hidden.  And I didn't have 'show hidden files' checked in folder options.  Now that I knew the problem, how would I fix it?  There were a lot of files and folders to go through.  The answer of course is Powershell.  I got it right on the first try and the code is so small, so compact, and so easy to come up with, it just screams PS.  And it's the perfect example of what PS is all about.  So here's the query I used… enjoy

dir -recurse -force | %{$_.attributes = 'Archive'}

Now, here’s a brief explanation of a couple of the flags.

-recurse — means to run it against all the subfolders too.
-force — this is a good one because it tells PS to show you all the files.  If they’re hidden you won’t see them unless you use this flag.
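And if you'd rather only touch the files that are actually hidden instead of resetting everything to Archive, here's a slight variation on the same 1-liner (just a sketch I'm throwing in, not what I actually ran):

dir -recurse -force | ?{$_.Attributes -match "Hidden"} | %{$_.attributes = 'Archive'}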

So that’s it really.  A single line of code to unhide hundreds of files in dozens of folders and subfolders.  And since this is my first day back after vacation, I’d say I’m starting it off on a decent note.

Comparing DB objects in Powershell

I got a request today to compare 2 DBs and see if they’ve got all the same tables. It seemed easy enough since I’ve got the redgate Schema Compare tool, but it actually isn’t. See, for some reason Schema Compare doesn’t work very well on my box, and it’s a big overkill for what I needed. All I really needed was to see if the same tables existed. I wasn’t interested in comparing their actual definitions. So I thought I’d give Powershell a try.

I started by picking my cmdlet, which was going to be Compare-Object. I use its alias ‘Diff’. Now, this cmdlet isn’t typically used for comparing DB objects, but there’s nothing that says it can’t be. After giving it a little thought I decided to write the following code to get my comparison. Oh, and I’m just using sample DBs for my examples.

$a = dir sqlserver:\sql\localhost\default\databases\AdventureWorks\tables;
$b = dir sqlserver:\sql\localhost\default\databases\AdventureWorksLT\tables;

Diff -reference ($a) -difference ($b) -property name

OK, that’s the base-level syntax you need to do a comparison on table names. That code will give you the following results.

The left arrows indicate tables that are only found in the reference object ($a), while the right arrows indicate objects that are only in the difference object ($b). And that’s really it. The only thing you need to know is that while the differences are outlined, objects that appear in both sets aren’t listed. So your best bet would be to have an empty result set because that would mean that all of the objects exist in both places.

Now, keep in mind also that I only matched on the name property. So there are opportunities for this to go wrong because I’m not taking schema into account. If I wanted to take schema into account though, I would just add it to the property list like this:

$a = dir sqlserver:\sql\localhost\default\databases\AdventureWorks\tables;
$b = dir sqlserver:\sql\localhost\default\databases\AdventureWorksLT\tables;

Diff -reference ($a) -difference ($b) -property schema, name

This time you notice that the Address table shows up twice in different schemas. And in the first result set the Address table wasn’t there, so that means that the Address table was considered a match in both DBs because we weren’t taking schema into account.

However, there's one final piece that you may find useful. What if you want to see all the objects, even the ones that match? You can throw in the -IncludeEqual flag. I'll run the original Diff to show you that now the Address table shows back up and is indeed considered equal when you compare it just by name.

$a = dir sqlserver:\sql\localhost\default\databases\AdventureWorks\tables;
$b = dir sqlserver:\sql\localhost\default\databases\AdventureWorksLT\tables;

Diff -reference ($a) -difference ($b) -property name -IncludeEqual

Now that we’ve got all the plumbing worked out you can apply this to SPs, functions, or whatever you like.
It won’t give you an actual schema compare against these objects, but it’s really helpful in quite a few instances to be able to just see which objects exist.
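For example, a sketch of the same trick against SPs (same sample DBs, and taking schema into account this time) could look like this:

$a = dir sqlserver:\sql\localhost\default\databases\AdventureWorks\StoredProcedures;
$b = dir sqlserver:\sql\localhost\default\databases\AdventureWorksLT\StoredProcedures;

Diff -reference ($a) -difference ($b) -property schema, name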

That’s it for this time.

Finding SQL VMs: Part 2

Ok, I’ve got a couple other methods so I thought I’d pass them along to you guys. These are methods passed to me by users.

The first one is: xp_readerrorlog 0,1,'Manufacturer'
I tried this on a number of my boxes with very mixed results. It didn't have anything that specifically told me whether the box was virtual, and it didn't work on any of my VMWare boxes. It appeared to work well enough with Hyper-V though.
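If you want to run that check from the shell instead of SSMS, a quick sketch with Invoke-SqlCmd (the server name is just a placeholder) would be:

invoke-sqlcmd -ServerInstance "Server1" -Query "EXEC xp_readerrorlog 0, 1, 'Manufacturer'"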

The 2nd is: Win32_BaseBoard
This WMI class worked just fine for finding Hyper-V instances, but didn't prove fruitful at all on VMWare. And again, there wasn't really any way to tell whether the box was physical or just not being reported correctly.
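For reference, here's roughly what that check looks like from the shell. This is just a sketch with a placeholder server name, and the Manufacturer and Product cols are where the VM info shows up when it shows up at all:

gwmi Win32_BaseBoard -ComputerName "Server1" | FT Manufacturer, Product -auto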

So there you have it. We've got 2 more methods that I haven't had much luck with, but maybe you'll have better luck if you've got the right environment. So if you really don't like the one I gave you yesterday, then pick one of these.

Is your SQL Server in a VM?

It seems that one of the key pieces of info you need when looking into performance problems is whether your SQL box is sitting in a VM or not. This could greatly expand the scope of your troubleshooting. The problem, of course, is that Windows has no idea it's in a VM, so there's nothing really to query inside Windows itself to get this info. Oh, if you're lucky you can run through the installed software and look for the VM tools, but that's really hit or miss, and not as easy as I'd like.

I’ll show you the way I do it though and you can either adopt it or not. Frankly, I’ll be really interested in seeing if anyone leaves a comment with a better way, cause I just stumbled on this about a yr ago so I don’t know how it’s usually done.

Anyway, we’re going to turn to Powershell for the answer… there’s a shock, huh?
Here’s the simple code it takes to query for a VM instance:

gwmi Win32_BIOS -ComputerName "Server1"

That’s pretty easy, right?
Now for the results. I’ve only got 2 VM packages to test with so if you’ve got something other than VMWare or Hyper-V then you’ll have to devise your own test.

For VMWare, the results will look like this:

Notice the SerialNumber has “VMware” in it? That’s what you’re looking for.
Now, to filter out the boxes in your LAN that are VMs, you have but to filter based on that criteria. It would look something like this:

$a = "Server1", "Server2", "Server3"

$a | %{$bios = gwmi Win32_BIOS -ComputerName $_; If($bios.SerialNumber -match "VMware") {"$_ is Virtual"}}

Ok, so it’s not a tremendously useful example, but it shows you how the simple filter would work.

Now let’s move on to Hyper-V. That output would look like this:

This time notice that it doesn’t say “Hyper-V”, and it doesn’t even say anything special in the SerialNumber. This time you have to look in the Version col. This col has “VRTUAL” in it. It would be nice if it said Hyper-V, but it doesn’t. So now you can apply the same filter above to find your VMs if you’re in a Hyper-V shop.

$a = "Server1", "Server2", "Server3"

$a | %{$bios = gwmi Win32_BIOS -ComputerName $_; If($bios.Version -match "VRTUAL") {"$_ is Virtual"}}
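And if you're just spot-checking a single box by eye, you can pull only the two properties that matter (FL is the Format-List alias):

gwmi Win32_BIOS -ComputerName "Server1" | FL SerialNumber, Version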

Now, let's put them together. If you're in a mixed shop like I am, you'll want to know what kind of VM your SQL box is in so you know which VM admin to work with when you have problems. So this last script will tell you which kind of VM each of your boxes is in. Now you can query all of your servers and put that info into a table so you always have it. And don't forget to run it periodically to make sure nothing's changed, because we're always moving boxes into VMs, and even taking a few back out. So keeping up with the latest can be useful. I run mine once a week.

$a = "Server1", "Server2", "Server3"

$a | %{

$bios = gwmi Win32_BIOS -ComputerName $_; 

if($bios.Version -match "VRTUAL") {"$_ is Hyper-V"}
if($bios.SerialNumber -match "VMware") {"$_ is VMWare"}
if($bios.Version -notmatch "VRTUAL" -and $bios.SerialNumber -notmatch "VMware") {"$_ is Physical"}
		
}

I would have loved to use the SWITCH statement here, but since I'm having to query 2 separate cols it wouldn't have been the easiest way to do it. However, just in case you get a chance to use it, here's a quick lesson on SWITCH. It's like the CASE statement you see in pretty much every language out there. I won't bother explaining the details of the syntax because I think it's fairly self-explanatory. If you do need help though, ping me and I'll give you a hand, or you could do the responsible thing and bing it on google first.
But here’s a quick example of how you’d use SWITCH.

$a = "hello"

switch ($a)
{
"hello" {"You typed the right word."}
default {"You typed something other than 'hello'."}
}

Again, it’s not a particularly useful example, but it does show you how SWITCH works. It’s really just a CASE statement.

Learning Powershell in Steps

One of the places PS shines is in its ability to let you work the way you want. So often I find someone stagnating in PS because they can't figure out the "proper" way to do the task. So they stare at the screen for a long time, try a few things, and finally give up. The problem with that is that all of a sudden PS is this new thing they don't understand, and they don't have time to learn its nuances well enough to make it useful. So they get discouraged and don't even consider it when coming up with solutions to issues. And this is coupled with the fact that sooooooo many of the PS examples out there are by guys who just don't get it. They post these really long scripts written in native SMO when a simple 1-liner will do. Why would you do that to someone? I've said many times that if those scripts had been my first intro to PS I wouldn't have given it a second glance either. To me, these guys can definitely code, but they really don't *get* what PS is all about.

So here I'm going to help you with what I'm calling the conduit method of learning PS. Because even now, when I can't figure out how to do something in "proper" PS, I still fall back on what I know. I've said many times in my SQLSaturday sessions that PS will do pretty much whatever you tell it, and that it's perfectly compatible with regular DOS commands. And the same thing goes for T-SQL.

Yesterday I needed to take a user account and add it to every DB on a big server. This server had hundreds of DBs on it. Well, of course I went to PS. My goal was to perform the task as fast as I could and go on about my day, so I chose the conduit method instead of the native PS method. Sure, I could call the regular SMO methods, maybe even figure out how that's done and the exact params the methods take, etc., but SMO is so poorly documented it's hard to ferret some of that stuff out. And you know what? Sometimes it's more important to just get the job done than it is to make sure it's done in "proper" or native PS.

So what is this conduit method I'm talking about? Well, it's marrying what you know in T-SQL with using PS only as a conduit for your T-SQL. This usually consists of using Invoke-SqlCmd inside a loop. This way you get the best of both worlds: the power of PS and the simplicity of what you already know, and you don't have to rewrite it every time. So again, the conduit method lets you code in T-SQL and just delivers it in PS. And it's a very powerful technique.

OK, so adding a user account to all my DBs. Here’s what the script looked like:

dir | %{invoke-sqlcmd -ServerInstance localhost -database $_.Name -query "CREATE USER [MyUser] FOR LOGIN [MyUser]"}

So I start at the DB tree in PS. If you run a ‘dir’ command you’ll get a list of DBs. Now, you loop through them, and for each one, you call invoke-sqlcmd and connect to that DB. Now you run the T-SQL inside the -query param. It’s that easy. Now you’ve got a repeatable method for running code against a whole list of DBs on any server and it’ll run whatever is in the -query param.

And if you’ve got code that’s more complicated, you can put it in a file and run it that way. It would look like this:

dir | %{invoke-sqlcmd -ServerInstance localhost -database $_.Name -InputFile c:\MySQLFile.sql}

With this you can deploy SPs or views, or tables, or whatever you like to every DB. And you can easily alter it to loop across servers too. So even if you’re a beginner you can use this method because it never changes and you can still get the full power out of PS. And this is the point isn’t it?
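And just so you can see the shape of it, here's a sketch of the same conduit across a list of servers. The names are placeholders and it assumes default instances:

$Servers = "Server1", "Server2", "Server3"
$Servers | %{
$ServerName = $_;
dir SQLSERVER:\SQL\$ServerName\DEFAULT\Databases | %{invoke-sqlcmd -ServerInstance $ServerName -database $_.Name -InputFile c:\MySQLFile.sql}
}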

So many big PS guys completely miss the point that PS is supposed to make our lives easier. They insist that things have to be done a certain way and frankly I'm getting tired of all the little pissing contests that take place. Oh, you have to do it this way because it performs better. Or you have to do it this way because it's more elegant. And while I agree with those to a certain degree, there's nothing more elegant than getting the job done and not having to spend all day doing it.

I've had arguments with guys about scripting DB objects and they insist that native SMO is faster than the way I do it, even though mine is a lot less code. And I've tested it, and native SMO is faster. On one of my DBs that had several thousand objects the SMO came in like 15secs sooner than the shorter way. Wow, 15secs, really? What will I do with all that extra time? Maybe I can use it to finally write that book I've been planning, huh? So for most cases, these differences are just pissing contests that I have no use for. I don't care if scripting an entire DB takes a few extra secs. I'll spend far more time than that writing the SMO it takes to save that time.

And since a lot of our work is adhoc in nature, it helps us to be able to write short, tight code. And by adhoc I mean they're requests that we couldn't have foreseen, so we have to write something from scratch. I don't know about you, but I don't want my hands in raw SMO 4x a day. I'd rather write a few 1-liners and go to lunch.

So don’t let anyone tell you that the conduit method is worthless or even inferior. It’s just as valid a method as writing 2 pages of SMO to do the same thing, and it performs plenty fast enough. Don’t be afraid of PS. Use the conduit method until you can learn more PS and in the meantime you’ll be more productive than you’ve ever been.

And we’ve got lots of PS vids on MidnightDBA.com, so go check them out and we’ll explain all kinds of things to you.

Powershell Homework Answers

Ok, I got some answers from the PS homework, so now it’s time for my solutions. If you didn’t get a chance to look at the assignments, take a look at them here real quick if you like and test yourself:
http://www.midnightdba.com/DBARant/?p=647
As with anything there can be multiple solutions to each problem, and while I may not give every single version of a solution, I’ll cover the major ones as I think of them. At times I’m also going to expand on the solution and show you a complete script you can use in your environment.

1. Get the total size of all the DBs on a server.

Method 1:

$Total = 0;
DIR | %{ $Total += $_.Size };
$Total

This is a basic method which doesn’t really require much explanation, but if you would like a full discussion of it, you can go here: http://www.midnightdba.com/DBARant/?p=488

Method 2:

DIR | measure -property Size -sum

Again, there’s not much to say about this.
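Both methods assume you're sitting at the Databases node for the server in question, so you'd cd there first. Something like this, with localhost and the default instance as placeholders:

cd SQLSERVER:\SQL\localhost\DEFAULT\Databases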

However, as a bonus, here’s a lovely script that will give you some good size info on your DBs, and then give you the sums at the bottom.

param($ServerName = "localhost")
$ServerName

cd SQLSERVER:\SQL\$ServerName\DEFAULT\Databases;

$a = dir;

$Size = @{Label="Size(MB)"; expression={"{0:N2}" -f ($_.Size)}};
$DataSpaceUsage = @{Label="DataSpaceUsage(MB)"; expression={"{0:N2}" -f ($_.DataSpaceUsage/1024)}};
$IndexSpaceUsage = @{Label="IndexSpaceUsage(MB)"; expression={"{0:N2}" -f ($_.IndexSpaceUsage/1024)}};
$TotalSpaceUsage = @{Label="TotalSpaceUsage(MB)"; expression={"{0:N0}" -f (($_.DataSpaceUsage + $_.IndexSpaceUsage)/1024)}};
$SpaceAvailable = @{Label="SpaceAvailable(MB)"; expression={"{0:N2}" -f ($_.SpaceAvailable/1024)}};

$a | FT Name, $Size, $DataSpaceUsage, $IndexSpaceUsage, $TotalSpaceUsage, $SpaceAvailable -auto

#### Now get the total and put it into diff measures.

$Total = ($a | measure -property size -sum)

# $a | %{$Total += $_.size}

$TotalInMB = @{Label="Total(MB)"; expression={"{0:N2}" -f ($_.sum)}};
$TotalInGB = @{Label="Total(GB)"; expression={"{0:N2}" -f ($_.sum/1024)}};
$TotalInTB = @{Label="Total(TB)"; expression={"{0:N2}" -f ($_.sum/1024/1024)}};

$Total | FT $TotalInMB, $TotalInGB, $TotalInTB

CD C:\


2. Cycle through a list of servers and get the service account that all the SQL services are set to start with.

$Servers = "Server1", "Server2", "Server3"

$Servers | %{ 
$curServer = $_ ## Current server in the loop.
gwmi win32_service -ComputerName $_ | ?{$_.DisplayName -match "SQL"} 
} | Format-Table SystemName, DisplayName, StartName -auto

This is one of those that has a few subtly different versions, but there's no need for me to go into each one.
However, here’s my original post on the topic.
http://www.midnightdba.com/DBARant/?p=386

3. Script all the SPs in a DB to a txt file. Make sure they’re separated by GO stmts or the script won’t run.

dir | %{$_.Script() | out-file c:\SP.txt -append; "GO" | out-file c:\SP.txt -append; $_.Name;}
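This one assumes you're at the StoredProcedures node of the DB you're scripting, so you'd cd there first; something like this, with the DB name as a placeholder:

cd SQLSERVER:\SQL\localhost\DEFAULT\Databases\AdventureWorks\StoredProcedures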

There are a few ways to do this one as well, but again they’re just nuances. And when I say a few ways, I’m really only referring to the proper way to do it in PS, and not the overly complicated SMO versions. While you can definitely script this in SMO, it’s so long I don’t even consider it a solution in comparison to these 1-liners.
As well, I’ve got 2 vids that explain the topic in full. Here’s what you need to know:
http://midnightdba.itbookworm.com/VidPages/PowershellScriptDBObjects/PowershellScriptDBObjects.aspx
http://midnightdba.itbookworm.com/VidPages/PowershellScriptDBObjectsWithOptions/PowershellScriptDBObjectsWithOptions.aspx

4. Change the default file location for a group of servers.

$a = dir
$a | %{$_.Settings.DefaultFile = "C:\MyNewFiles"; $_.Settings.Alter()}

Ok, while I haven’t officially blogged on this one, there are 2 similar blogs that show you how to get to the level you need to be in order to run this one. So follow those other ones, and you’ll know how to do this one.
http://www.midnightdba.com/DBARant/?p=589
http://www.midnightdba.com/DBARant/?p=601
There’s also my video that shows you how to do this and I think it’s more comprehensive.
http://midnightdba.itbookworm.com/VidPages/PowershellServerProps/PowershellServerProps.aspx

5. Cycle through all the jobs on a server and change all their owners to sa.

dir | %{$_.OwnerLoginName = "sa"; $_.Alter(); $_.Name;}

Now, there’s not a lot of explanation here. That’s because I’ve got a vid that explains it all.
http://midnightdba.itbookworm.com/VidPages/PowershellChangingJobOwners/PowershellChangingJobOwners.aspx
Ok, that’s it for the first PS homework assignments. You’ll notice that I assumed you knew where to run these from. I can’t very well teach every single basic every single time. However, I tried to give you resources to find the answers.

Test your Powershell prowess

Ok, well I've done quite a few PS posts now, and while it's great to learn, it's also great to test yourself. So here are some exercises you can use to test your skills. These are all very common PS tasks for enterprise admins. Some will be easy, and some will be more difficult. So here, take some time and see how it goes. And if you like, post your answers in the comments of this post for everyone else to see.

I'll post my answers in a separate post. And again, all these solutions will be in PS.

1. Get the total size of all the DBs on a server.
2. Cycle through a list of servers and get the service account that all the SQL services are set to start with.
3. Script all the SPs in a DB to a txt file. Make sure they’re separated by GO stmts or the script won’t run.
4. Change the default file location for a group of servers.
5. Cycle through all the jobs on a server and change all their owners to sa.

Alrighty… that’s about it. Good luck.

Dropping DBs in Powershell

As long as I’m on a roll here with server-level ops, I thought I’d go ahead and cover dropping DBs in PS. This should be a fairly short post as there’s not much to this, but it’s worth talking about anyway.

Dropping DBs isn’t any big deal in T-SQL:

DROP DATABASE SeansDB

And the PS method for doing this is of course a 1-liner, but it's not as easy as T-SQL. Again there's more than one method for doing it and I'll show you both of them. Here, whether you connect through sqlps or through PS proper, you want to be at the Databases node. Here's a screenshot of doing it from sqlps, but again, it really doesn't matter.

Method 1

dir | %{$_.Drop()}

There are a couple things to note here. First of all, this is typical PS, so if you run this cmd as it’s written here, you’ll drop every DB on your server… totally uncool. And 2nd, I’m always talking about how if you see an Alter() method on the get-member list then that usually means it wants you to use it before the changes are pushed to the server itself. Well, this is one of those times that makes me put the ‘usually’ in there because while that’s a good rule of thumb, PS is nice enough to drop objects when you ask it to. So anyway, unless you want to lose everything, I wouldn’t run the code above. I just wanted to have a basis for future syntax.

So all we have to do now is limit our result list to the DB we’re interested in:

dir | ?{$_.Name -eq "SeansDB"} | %{$_.Drop()}

It just doesn’t get any easier than that. Now, at this point T-SQL is still ahead, cause even I would still much rather use T-SQL for dropping a single DB. Powershell is going to pull ahead pretty fast when it comes to dropping several DBs, or even a single DB on multiple servers.

Let’s say you’ve got several test DBs, and they all have the word ‘test’ in them somewhere. Since different devs created them, they don’t have a solid naming convention. Hell, some of them may even just have ‘tst’ in them, who knows, right?
At this point it’s just a matter of altering the above script so that it can accommodate your criteria.

dir | ?{$_.Name -match "test" -or $_.Name -match "tst"} | %{$_.Drop()}

T-SQL would require you to code a cursor for this, and while the for-each is technically a cursor in PS, it takes next to no coding for us. PS is starting to pull ahead a little now. And by simply changing the where-object criteria, you can easily change this script to do things that are more difficult in T-SQL like dropping DBs that are all owned by a certain login, or were created on a certain date, or are under or over a certain size, or even haven’t been backed up recently. Of course, some of that criteria you’d never use, but it’s all there for you. And again, you can find out what you can use by doing a get-member and anything that comes up as a property is usable.

dir | gm
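For instance, here's a sketch of finding candidates by owner or by stale backups. The login name is a placeholder, and this version just lists the names instead of dropping anything:

dir | ?{$_.Owner -eq "OldLogin" -or $_.LastBackupDate -lt (Get-Date).AddDays(-30)} | %{$_.Name}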

Method 2
Now let’s look at a shorter method for dropping a single DB.

(dir SeansDB).Drop()

That’s pretty easy, and to do multiple DBs, it could look something like this:

(dir | ?{$_.Name -match "test" -or $_.Name -match "tst"}).Drop()

Now, if you have a DB that exists across multiple boxes then you can drop all of them like this:

$a = "server1", "server2", "server3"
$a | %{
$ServerName = $_; ## Just setting this to make the script easier to read.
cd sqlserver:\sql\$ServerName\default\databases
(dir SeansDB).Drop()}

And that's it. Going against multiple boxes is so easy. And the way I'm doing it, by setting the $a var to the server list, you can easily populate $a any way you like and you don't have to change the rest of the script. So make it the results of a SQL query, or a txt file, etc. It doesn't matter.

OK, so that’s it this time. Be careful with this or you’ll find yourself doing a recovery on all your DBs. Man, it would be really easy to be evil with this huh? In less than 60secs you could kill every DB in your company if you wanted to… say if they fired you for no reason and you wanted to get back at them. I’m not saying you should, I’m just saying it’s possible.
And after something like that I’m compelled to say (again) that I’m not responsible for your (mis)use of any of these methods.