Tag Archives: DBA

Partitioning and the free lunch

Quite often there’s no benefit to a feature if you’re not using it, but this time that’s not the case. Even if you’re not using partitioning you can still take advantage of it in a really cool way.

Let’s say you’ve got a table that you load every night, and you want to move the current data to a new table and truncate the current one. Here’s how you’d do something like that today.

1. Select * into T2 from T1
2. Insert into T2 select * from T1
3. Rename T1 to T2. Then recreate T1.

But let’s face it, none of those choices are really ideal because they take time. Of those, though, I’d pick them in the order 1, 3, 2. I don’t like renaming objects because the recreated object ends up with a new ID, and it’s hard to track objects over time if they keep changing IDs. So those are your choices if you’re not taking advantage of partitioning.

If you ARE taking advantage of partitioning, however, you’ve got a different choice. Now you can switch the data out of the current table and into the second one w/o any data actually moving at all. And the best part is that you don’t even have to have partitioned your tables. That’s right… every table is automatically a single-partition table, so you’re using partitioning whether you know it or not. So that means that in order to empty T1 and move its data to T2, all you have to do is this:

alter table T1
switch partition 1 to T2

Now there are some caveats, but as with most things it’s not too bad. The biggest 3 are:
1. The table structures have to be identical.
2. They have to be on the same filegroup. So the partition you’re switching out and the one you’re switching it into have to be on the same filegroup.
3. The target table/partition has to be empty.

There are some others that are advised, but those 3 are the only showstoppers. And once you’ve got the data into the new table, you can then move it to a new filegroup, or do whatever you like with it.
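Just to make those caveats concrete, here’s a minimal sketch of the whole nightly cycle. The column definitions and the filegroup are made up; the point is that T2 is identical to T1, sits on the same filegroup, and is empty before the switch. (I’m using the plain SWITCH TO form here, which is the variation you can use when the source table isn’t partitioned.)

-- Build the target with the exact same structure, on the same filegroup as T1.
CREATE TABLE dbo.T2
(
LoadDate datetime NOT NULL,
SomeValue int NOT NULL
) ON [PRIMARY]

-- The switch is a metadata-only operation, so it's essentially instant.
ALTER TABLE dbo.T1 SWITCH TO dbo.T2

-- T1 is now empty and ready for tonight's load; yesterday's data lives in T2.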

So anyway, it’s a wicked cool method for getting rid of your current data even though you’re not officially partitioning your table.

Learning Powershell in Steps

One of the places PS shines is in its ability to let you work the way you want. So often I find someone stagnating in PS because they can’t figure out the “proper” way to do the task. So they stare at the screen for a long time, try a few things, and finally give up. The problem with that is that all of a sudden PS is this new thing they don’t understand, and they don’t have time to learn its nuances well enough to make it useful. So they get discouraged and don’t even consider it when coming up with solutions to issues. And this is coupled with the fact that sooooooo many of the PS examples out there are by guys who just don’t get it. They post these really long scripts that are written in native SMO when a simple 1-liner will do. Why would you do that to someone? I’ve said many times that if those scripts had been my first intro to PS I wouldn’t have given it a second glance either. To me, these guys can definitely code, but they really don’t *get* what PS is all about.

So here I’m going to help you with what I’m calling the conduit method of learning PS. Because even now, when I can’t figure out how to do something in “proper” PS, I still fall back on what I know. I’ve said many times in my SQLSaturday sessions that PS will do pretty much whatever you tell it, and that it’s perfectly compatible with regular DOS commands. And the same thing goes for T-SQL.

Yesterday I needed to take a user account and add it to every DB on a big server. This server had hundreds of DBs on it. Well, of course I went to PS. My goal was to perform the task as fast as I could and go on about my day, so I chose the conduit method instead of the native PS method. Sure, I could call the regular SMO methods, maybe even figure out how that’s done and the exact params the methods take, etc., but SMO is so poorly documented it’s hard to ferret some of that stuff out. And you know what? Sometimes it’s more important to just get the job done than it is to make sure it’s done in “proper” or native PS.

So what is this conduit method I’m talking about? Well, it’s marrying what you know in T-SQL and using PS only as a conduit for your T-SQL. This usually consists of using Invoke-SqlCmd inside a loop. This way you get the best of both worlds: you get the power of PS and the simplicity of what you already know, and you don’t have to rewrite it every time. So again, the conduit method lets you code in T-SQL and just delivers it in PS. And it’s a very powerful technique.

OK, so adding a user account to all my DBs. Here’s what the script looked like:

dir | %{invoke-sqlcmd -ServerInstance localhost -database $_.Name -query "CREATE USER [MyUser] FOR LOGIN [MyUser]"}

So I start at the Databases node of the provider tree in PS. If you run a ‘dir’ command you’ll get a list of DBs. Then you loop through them, and for each one you call invoke-sqlcmd and connect to that DB, passing its name to the -database param. The T-SQL goes inside the -query param. It’s that easy. Now you’ve got a repeatable method for running code against a whole list of DBs on any server, and it’ll run whatever is in the -query param.

And if you’ve got code that’s more complicated, you can put it in a file and run it that way. It would look like this:

dir | %{invoke-sqlcmd -ServerInstance localhost -database $_.Name -InputFile c:\MySQLFile.sql}

With this you can deploy SPs, views, tables, or whatever you like to every DB. And you can easily alter it to loop across servers too, as shown below. So even if you’re a beginner you can use this method, because it never changes and you still get the full power of PS. And that’s the point, isn’t it?
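Just so you can see how little changes when servers come into the mix, here’s a rough sketch of that outer loop (the server names are made up, and I’m assuming default instances):

$Servers = "Server1", "Server2", "Server3"
$Servers | %{
$CurServer = $_   ## Hang on to the server name for the inner loop.
dir SQLSERVER:\SQL\$CurServer\DEFAULT\Databases | %{invoke-sqlcmd -ServerInstance $CurServer -database $_.Name -InputFile c:\MySQLFile.sql}
}

The only real difference is that the dir now points at each server’s Databases node instead of wherever you happen to be sitting.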

So many big PS guys completely miss the point that PS is supposed to make our lives easier. They insist that things have to be done a certain way, and frankly I’m getting tired of all the little pissing contests that take place. Oh, you have to do it this way because it performs better. Or you have to do it this way because it’s more elegant. And while I agree with those to a certain degree, there’s nothing more elegant than getting the job done and not having to spend all day doing it. I’ve had arguments with guys about scripting DB objects where they insist that native SMO is faster than the way I do it, even though mine is a lot less code. And I’ve tested it, and native SMO is faster. On one of my DBs that had several thousand objects the SMO came in like 15secs sooner than the shorter way. Wow, 15secs, really? What will I do with all that extra time? Maybe I can use it to finally write that book I’ve been planning on, huh? So for most cases, these differences are just pissing contests that I have no use for. I don’t care if scripting an entire DB takes a few extra secs. I’ll spend far more time than that writing the SMO it takes to save that time.

And since a lot of our work is adhoc in nature, it helps us to be able to write short, tight code. And by adhoc I mean they’re requests that we couldn’t have foreseen, so we have to write something from scratch. I don’t know about you, but I don’t want my hands in raw SMO 4x a day. I’d rather write a few 1-liners and go to lunch.

So don’t let anyone tell you that the conduit method is worthless or even inferior. It’s just as valid a method as writing 2 pages of SMO to do the same thing, and it performs plenty fast enough. Don’t be afraid of PS. Use the conduit method until you can learn more PS and in the meantime you’ll be more productive than you’ve ever been.

And we’ve got lots of PS vids on MidnightDBA.com, so go check them out and we’ll explain all kinds of things to you.

Powershell Homework Answers

Ok, I got some answers from the PS homework, so now it’s time for my solutions. If you didn’t get a chance to look at the assignments, take a look at them here real quick if you like and test yourself:
http://www.midnightdba.com/DBARant/?p=647
As with anything there can be multiple solutions to each problem, and while I may not give every single version of a solution, I’ll cover the major ones as I think of them. At times I’m also going to expand on the solution and show you a complete script you can use in your environment.

1. Get the total size of all the DBs on a server.

Method 1:

DIR | %{ $Total += $_.Size };
$Total

This is a basic method which doesn’t really require much explanation, but if you would like a full discussion of it, you can go here: http://www.midnightdba.com/DBARant/?p=488

Method 2:

DIR | measure -property Size -sum

Again, there’s not much to say about this.

However, as a bonus, here’s a lovely script that will give you some good size info on your DBs, and then give you the sums at the bottom.

param($ServerName = "localhost")
$ServerName

cd SQLSERVER:\SQL\$ServerName\DEFAULT\Databases;

$a = dir;

$Size = @{Label="Size(MB)"; expression={"{0:N2}" -f ($_.Size)}};
$DataSpaceUsage = @{Label="DataSpaceUsage(MB)"; expression={"{0:N2}" -f ($_.DataSpaceUsage/1024)}};
$IndexSpaceUsage = @{Label="IndexSpaceUsage(MB)"; expression={"{0:N2}" -f ($_.IndexSpaceUsage/1024)}};
$TotalSpaceUsage = @{Label="TotalSpaceUsage(MB)"; expression={"{0:N0}" -f (($_.DataSpaceUsage + $_.IndexSpaceUsage)/1024)}};
$SpaceAvailable = @{Label="SpaceAvailable(MB)"; expression={"{0:N2}" -f ($_.SpaceAvailable/1024)}};

$a | FT Name, $Size, $DataSpaceUsage, $IndexSpaceUsage, $TotalSpaceUsage, $SpaceAvailable -auto

#### Now get the total and put it into diff measures.

$Total = ($a | measure -property size -sum)

# $a | %{$Total += $_.size}

$TotalInMB = @{Label="Total(MB)"; expression={"{0:N2}" -f ($_.sum)}};
$TotalInGB = @{Label="Total(GB)"; expression={"{0:N2}" -f ($_.sum/1024)}};
$TotalInTB = @{Label="Total(TB)"; expression={"{0:N2}" -f ($_.sum/1024/1024)}};

$Total | FT $TotalInMB, $TotalInGB, $TotalInTB

CD C:\

2. Cycle through a list of servers and get the service account that all the SQL services are set to start with.

$Servers = "Server1", "Server2", "Server3"

$Servers | %{ 
$curServer = $_ ## Current server in the loop.
gwmi win32_service -ComputerName $_ | ?{$_.DisplayName -match "SQL"} 
} | Format-Table SystemName, DisplayName, StartName -auto

This is one of those that has a few subtle nuance versions, but there’s no need for me to go into each one.
However, here’s my original post on the topic.
http://www.midnightdba.com/DBARant/?p=386
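And just to show one of those variants: if you want to keep the results around, the same loop pipes straight into a CSV (the file path is made up):

$Servers = "Server1", "Server2", "Server3"
$Servers | %{
gwmi win32_service -ComputerName $_ | ?{$_.DisplayName -match "SQL"}
} | Select SystemName, DisplayName, StartName | Export-Csv c:\SQLServiceAccounts.csv -NoTypeInformation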

3. Script all the SPs in a DB to a txt file. Make sure they’re separated by GO stmts or the script won’t run.

dir | %{$_.Script() | out-file c:\SP.txt -append; "GO" | out-file c:\SP.txt -append; $_.Name;}

There are a few ways to do this one as well, but again they’re just nuances. And when I say a few ways, I’m really only referring to the proper way to do it in PS, and not the overly complicated SMO versions. While you can definitely script this in SMO, it’s so long I don’t even consider it a solution in comparison to these 1-liners.
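If you want to see one of those nuances anyway, here’s a hedged sketch that scripts each proc to its own file instead of one big file (the folder is made up and needs to exist first):

dir | %{$_.Script() | out-file "c:\SPs\$($_.Name).sql"}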
As well, I’ve got 2 vids that explain the topic in full. Here’s what you need to know:
http://midnightdba.itbookworm.com/VidPages/PowershellScriptDBObjects/PowershellScriptDBObjects.aspx
http://midnightdba.itbookworm.com/VidPages/PowershellScriptDBObjectsWithOptions/PowershellScriptDBObjectsWithOptions.aspx

4. Change the default file location for a group of servers.

$a = dir
$a.settings.DefaultFile = "C:\MyNewFiles"
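That handles whichever server you’re sitting on. The question said a group of servers, though, so here’s a hedged sketch of how I’d wire that up (the server names are made up, and I’m throwing in Settings.Alter() because, like most SMO changes, the new value doesn’t get pushed to the server until you do):

$Servers = "Server1", "Server2", "Server3"
$Servers | %{
dir SQLSERVER:\SQL\$_ | %{
$_.Settings.DefaultFile = "C:\MyNewFiles"
$_.Settings.Alter()   ## Pushes the change to the server.
$_.Name
}
}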

Ok, while I haven’t officially blogged on this one, there are 2 similar posts that show you how to get to the level you need to be at in order to run it. So follow those, and you’ll know how to do this one.
http://www.midnightdba.com/DBARant/?p=589
http://www.midnightdba.com/DBARant/?p=601
There’s also my video that shows you how to do this and I think it’s more comprehensive.
http://midnightdba.itbookworm.com/VidPages/PowershellServerProps/PowershellServerProps.aspx

5. Cycle through all the jobs on a server and change all their owners to sa.

dir | %{$_.OwnerLoginName = "sa"; $_.Alter(); $_.Name;}

Now, there’s not a lot of explanation here. That’s because I’ve got a vid that explains it all.
http://midnightdba.itbookworm.com/VidPages/PowershellChangingJobOwners/PowershellChangingJobOwners.aspx
Ok, that’s it for the first PS homework assignments. You’ll notice that I assumed you knew where to run these from. I can’t very well teach every single basic every single time. However, I tried to give you resources to find the answers.

Changing user passwords

Most of us have sa and have had it for a long time, so exactly what it takes for an end user to change his own password isn’t typically a concern for us. And since it came up at work yesterday, I decided to make a video about it.
So if you’d like to see what it takes for a user to be able to change his own password, you can see the new vid here:
http://midnightdba.itbookworm.com/VidPages/ChangeUsersPword/ChangeUsersPword.aspx

Over-tuning Backups

This is going to be a short post, but an important one, because it demonstrates the importance of thinking about what you’re doing. I recently taught a backup tuning session at SQLSaturday 90 in OKC. You can see the session recording here if you like. I’ve been teaching this session off and on recently, and a user who saw it somewhere (I’m not really sure where) wrote me with a tuning issue, so I thought I’d share it with you.

In his email he outlined a restore scenario he was unable to tune. No matter what he did, the restore didn’t happen any faster, and he expected to see some improvement. In fact, his attempts to tune it only made it take longer. Here’s the code he sent me, and I’ve changed things to protect his environment.

RESTORE DATABASE AdventureWorks
PAGE = '1:200'
FROM DISK = 'c:\AW.bak'
WITH init, FORMAT,
buffercount = 500,
maxtransfersize = 2097152,
norecovery

So there’s nothing wrong with this syntax, and the restore should work just fine. But he’s right, he won’t see any performance gain at all and here’s why. How fast do you think you can restore a single page?
There’s just no way to copy 8KB of data any faster than it’s already going to be copied. And I’m not surprised at all that his restore takes longer. SQL is still gonna spin up all those resources because you’re telling it to. But since you’re only copying a single page, it’s not going to use them. So spinning up the resources won’t do you any good on this small operation and will only serve to cost you extra time. The resource spin-up will only take a few seconds, but on a page restore like this, that can be in the neighborhood of double the time. So it could take the restore from say 6-10secs to like 10-20secs or something like that. So it’s really not a big deal time-wise.

If you came in late to the backup tuning debate, the flags we’re arguing over are buffercount and maxtransfersize.
I’ll let you watch the session above for a full explanation of them, but I’ll go ahead and say here that 500 buffers just doesn’t make sense for this. That’s 500 buffers at 2MB apiece, which is roughly 1GB of buffer space spun up to move a single 8KB page. And transferring 8KB worth of data in 2MB chunks doesn’t make sense either.

The point here is this is a definite case where there’s really nothing to even tune. You can only move 8KB so fast. And while I applaud his efforts to apply something he learned in his shop, he needs to put a little more thought into where he applies it. However, he did the right thing. He tested, he saw an issue, and then he emailed me. We’ve discussed it fully and he now understands why he saw what he did. This is how you learn. And even more importantly, he’s out there trying to apply something he learned from a session. He’s not content with leaving things the way they are in his shop, and he’s not content with his current level of skill. You go girlfriend.

And he’s given me permission to blog about this issue so I don’t want any emails about how I screwed over a viewer who came to me for help.

Test your Powershell prowess

Ok, well I’ve done quite a few PS posts now, and while it’s great to learn, it’s also great to test yourself. So here are some exercises you can use to test your skills. These are all very common PS tasks for enterprise admins. Some will be easy, and some will be more difficult. So here, take some time and see how it goes. And if you like, post your answers in the comments of this post for everyone else to see.

I’ll post my answers in a separate post. And again, all the solutions will be in PS.

1. Get the total size of all the DBs on a server.
2. Cycle through a list of servers and get the service account that all the SQL services are set to start with.
3. Script all the SPs in a DB to a txt file. Make sure they’re separated by GO stmts or the script won’t run.
4. Change the default file location for a group of servers.
5. Cycle through all the jobs on a server and change all their owners to sa.

Alrighty… that’s about it. Good luck.

Daddy is happy

Sometimes there’s a method to my madness. I’ve got a Jr. on my staff who is just now learning the ropes. Now, I don’t mean a Jr. in the sense that he just doesn’t know as much SQL as I do, I mean it in the real sense. He actually knows practically nothing. He’s not even a big Windows guy.

So teaching him can be challenging because where do I start? Do I start with windows, networking, data types, what? Well, I decided to start him on backups. Like many times before I figured that would give him the best chances to learn the basics. See, in any large shop, you do backup/restore constantly. There’s always someone needing a DB backed up or restored to one location or another. So there’s plenty of opportunity to practice. But practicing backup/restore is just one of the reasons I throw beginners into that area. The other reason is to teach them good solid basics all around.

Because along with the plethora of backup/restore in a large environment, there’s also a plethora of problems that go along with it. You try to back up to a location and find out you don’t have enough space, so you have to decide what goes and what stays. You try to restore a DB and you don’t have enough space, so you have to restore different files to different drives. You find one acct doesn’t have permissions to a location, so you have to take care of that. Then you find out that backups have been failing on these boxes over here, so you have to look into what happened there. Then you have to work with the tape guy to bring some backups from offsite and try to find space on a server to restore them. Maybe you even have a tricky restore that requires some actual thought too. The point is that there are so many pitfalls in backup/restore that it’s an excellent place to start a beginner. Without even knowing or trying, they learn networking, windows security, SQL security, backup/restore syntax, file rotation methods, space mgmt, DNS, trace flags, and more. It’s a fabulous way to begin your career as a DBA.

Now, why am I so happy? Well because in this case my Jr. was talking to someone and trying to explain to them why he couldn’t restore their DB the way they wanted. He actually did a pretty good job. When he was finished, he looked over to see me smiling at him. He instantly said, what? I said, oh nothing. Then he’s like, then why are you staring at me. I said, did you hear what you just said to that guy? He goes, no. I was just explaining to him why I couldn’t restore his DB the way he needed. I said, ok, and why is that. He went on to explain to me what the problem was. I said, ok then how can we fix that. He said, well under the current circumstances we really can’t. That’s what I was trying to explain to him. I just smiled again. He said, what? Why do you keep doing that?

I said, look at you. 2 months ago you wouldn’t have been able to do that much less explain it. I never taught you that, where did you come up with it? He said, well I’ve done it so much and I’ve just learned that… all of a sudden his eyes got really wide. He looked at me and said… OHHHHH so that’s why you want me doing all the backup/restore. Because I learn all this other stuff with it. I said now you’re getting it. And it’s just cool to see that it’s working.

This is what I’m talking about when I talk about mentoring. A good mentor can show you where you need to go in the order you need it. And I’m not bragging, I’m just saying that it takes more than just reading books. You have to be with someone who’s been there a couple times. He can teach you how to think like a DBA. Hell, I’ve learned all my .NET from books and blog samples, and I actually suck. I get things done, but I’m no real coder in any sense of the word. Well, I’m on the receiving end of that now cause our big .NET guy at work has been showing me things and actually mentoring me a little, and it’s really cool the things I don’t know. So I’m even getting it a little myself. Cool stuff that.

Dropping DBs in Powershell

As long as I’m on a roll here with server-level ops, I thought I’d go ahead and cover dropping DBs in PS. This should be a fairly short post as there’s not much to this, but it’s worth talking about anyway.

Dropping DBs isn’t any big deal in T-SQL:

DROP DATABASE SeansDB

And the PS method for doing this is of course a 1-liner, but it’s not as easy as T-SQL. Again there’s more than one method for doing it and I’ll show you both of them. Here, whether you connect through sqlps or through PS proper, you want to be at the Databases node. Here’s a screenshot of doing it from sqlps, but again, it really doesn’t matter.

Method 1

dir | %{$_.Drop()}

There are a couple things to note here. First of all, this is typical PS, so if you run this cmd as it’s written here, you’ll drop every DB on your server… totally uncool. And 2nd, I’m always talking about how if you see an Alter() method on the get-member list then that usually means it wants you to use it before the changes are pushed to the server itself. Well, this is one of those times that makes me put the ‘usually’ in there because while that’s a good rule of thumb, PS is nice enough to drop objects when you ask it to. So anyway, unless you want to lose everything, I wouldn’t run the code above. I just wanted to have a basis for future syntax.

So all we have to do now is limit our result list to the DB we’re interested in:

dir | ?{$_.Name -eq "SeansDB"} | %{$_.Drop()}

It just doesn’t get any easier than that. Now, at this point T-SQL is still ahead, cause even I would still much rather use T-SQL for dropping a single DB. Powershell is going to pull ahead pretty fast when it comes to dropping several DBs, or even a single DB on multiple servers.

Let’s say you’ve got several test DBs, and they all have the word ‘test’ in them somewhere. Since different devs created them, they don’t have a solid naming convention. Hell, some of them may even just have ‘tst’ in them, who knows, right?
At this point it’s just a matter of altering the above script so that it can accommodate your criteria.

dir | ?{$_.Name -match "test" -or $_.Name -match "tst"} | %{$_.Drop()}

T-SQL would require you to code a cursor for this, and while the for-each is technically a cursor in PS, it takes next to no coding for us. PS is starting to pull ahead a little now. And by simply changing the where-object criteria, you can easily change this script to do things that are more difficult in T-SQL like dropping DBs that are all owned by a certain login, or were created on a certain date, or are under or over a certain size, or even haven’t been backed up recently. Of course, some of that criteria you’d never use, but it’s all there for you. And again, you can find out what you can use by doing a get-member and anything that comes up as a property is usable.

dir | gm
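For instance, here are a couple of hedged sketches using properties I’d expect to show up in that get-member list (verify the exact names on your own box before you trust them):

## Drop every DB owned by a particular login (the login name is made up).
dir | ?{$_.Owner -eq "OldDevLogin"} | %{$_.Drop()}

## Or drop the test DBs that haven't been backed up in over 30 days.
dir | ?{$_.Name -match "test" -and $_.LastBackupDate -lt (Get-Date).AddDays(-30)} | %{$_.Drop()}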

Method 2
Now let’s look at a shorter method for dropping a single DB.

(dir SeansDB).Drop()

That’s pretty easy, and to do multiple DBs, it could look something like this:

(dir | ?{$_.Name -match "test" -or $_.Name -match "tst"}).Drop()

Now, if you have a DB that exists across multiple boxes then you can drop all of them like this:

$a = "server1", "server2", "server3"
$a | %{
$ServerName = $_; ## Just setting this to make the script easier to read.
cd sqlserver:\sql\$ServerName\DEFAULT\Databases   ## Assuming the default instance.
(dir SeansDB).Drop()}

And that’s it. Going against multiple boxes is so easy. And the way I’m doing it by setting the $a var to the server list, you can easily populate $a any way you like and you don’t have to change the rest of the script. So make it the results from a SQL query, or from a txt file, etc. It doesn’t matter.
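For instance, swapping in a txt file or a SQL query is just a different first line (the file name and the ServerList table are made up):

$a = Get-Content c:\MyServers.txt   ## One server name per line.
## or
$a = invoke-sqlcmd -ServerInstance localhost -Database master -Query "SELECT ServerName FROM dbo.ServerList" | %{$_.ServerName}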

OK, so that’s it this time. Be careful with this or you’ll find yourself doing a recovery on all your DBs. Man, it would be really easy to be evil with this huh? In less than 60secs you could kill every DB in your company if you wanted to… say if they fired you for no reason and you wanted to get back at them. I’m not saying you should, I’m just saying it’s possible.
And after something like that I’m compelled to say (again) that I’m not responsible for your (mis)use of any of these methods.

Killing SPIDs in Powershell

Today we’re going to continue our exploration of the server-level in PS.  You’ll remember we’ve explored it a little bit before, when we talked about querying cluster nodes.

So today we’re going to kill users in a DB.  This is incredibly easy to do in PS.

First, let’s take a look at the T-SQL counterpart. Pretty much everyone has a script like this.

DECLARE @currUser varchar(100),
		@SQL nvarchar(200)

DECLARE Users CURSOR
FOR select DISTINCT spid from sys.sysprocesses
WHERE [program_name] LIKE '%businessobjects%'

OPEN Users

	FETCH NEXT FROM Users INTO @currUser
	WHILE (@@fetch_status <> -1)
	BEGIN

		SET @SQL = 'KILL ' + @currUser-- + ''
		EXEC (@SQL)
		--print @SQL
		
FETCH NEXT FROM Users INTO @currUser
	END

CLOSE Users
DEALLOCATE Users

And of course, the problem is as it always is in T-SQL. It doesn’t scale to several boxes, and if you find yourself w/o your script, it’s not really easy to reproduce off the top of your head unless you’re one of those freaks who can write cursors w/o looking at a map. And I don’t know about you, but I quite often find myself w/o my scripts, so I like things that are easier to reproduce.

And like I said, PS scales very well to multiple boxes. Why would you want to run it against more than one box you ask?
Well, let’s say you’ve got a single app server that hits multiple DB servers and you want to kill them all for a maint window. You can’t assume that stopping the service on that app server will kill the DB connections. In fact, it quite often doesn’t. There are other testing scenarios where this could be useful, but I’ll let all of you use it as you see fit. The point is it’s here if you need it.

So to do this in powershell:
Start by connecting to a DB at the server level (either in sqlps or in PS proper… my examples will be in sqlps, but it really doesn’t matter as long as you connect).

In sqlps you can do this by doing a right-click on the servername and going to ‘Start Powershell’.

Then you just need to go up one level like this:

>cd..

Now you’re ready for the command. 

So let’s say you want to kill all the spids connected to SeansDB.  And as usual, there are 2 ways to do this.  I’m going to show you both of them just for completeness.

Method 1:

dir | ?{$_.Name -eq "SeansDB"} | %{$_.KillAllProcesses("SeansDB")}

Now, those of you who know PS, know this is actually quite wasteful. What you’re doing is getting a list of all the DBs, then filtering it down to one and then running the method.
And of course since you call the method with the DB name to begin with this is actually useless. However, it’s still a legal method so I wanted to mention it.

Method 2:
You’ve got to think about what you’re doing here so you can make the right decision.  When you do this in sys.sysprocesses, you’re working at the server level, so the only place individual DBs come into play is with the result set of whatever cursor you write to limit your results. And quite often killing all the spids in a DB can be very useful. So here’s a better way to go about it in PS.

(dir).KillAllProcesses("SeansDB")

This is easy to remember, and easy to type. And of course, it’s obvious what it does… it kills all the spids connected to SeansDB. Now, you can also kill just a specific spid like this:

(dir).KillProcess(69)

And that’s how PS can bring that long t-sql cursor down to almost nothing. What? What’s that? You want more? You want to be able to kill spids based off of application, or CPU, or some other criteria? Well, now you’re just being demanding, but I think PS can do something for you. This is just going to take an extra step and here’s what it’ll look like.
In our example here let’s kill spids by application. So we’ll kill all users who are connecting through SSMS.

$a = (dir).EnumProcesses()
$a | ?{$_.Program -match "SQL Server Management Studio"} | %{(dir).KillProcess($_.Spid)}   ## KillProcess lives on the server object, so call it through (dir).

Now, there’s a treasure trove of stuff in here. Let’s take a look at this in detail, especially the EnumProcesses() method.
This method is overloaded so you’ve got some nice functionality built-in.
Here are the documented overloads:

$a = (dir).EnumProcesses() ## Gets all processes.
$a = (dir).EnumProcesses($False) ## Excludes system processes.
$a = (dir).EnumProcesses(69) ## Get info on a single spid... 69 in this case.
$a = (dir).EnumProcesses("domain\user") ## Gets processes run by a specified login.

And now that you’ve got your list of processes, you can do a get-member on them to see what properties are available to you. So remember above when we killed spids by Program? Well, you can kill by anything returned as a property from your get-member. Here’s a screenshot of the results I got from mine.
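Just as a hedged sketch, and assuming the process list exposes Login and Cpu columns (check your own get-member output for the exact names), killing by login or by CPU looks about the same:

$a = (dir).EnumProcesses()
## Kill everything a particular login is running (the login is made up).
$a | ?{$_.Login -eq "MyDomain\SomeUser"} | %{(dir).KillProcess($_.Spid)}
## Or kill anything that's chewed up a ton of CPU.
$a | ?{$_.Cpu -gt 100000} | %{(dir).KillProcess($_.Spid)}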

Killing processes in powershell is so easy it almost makes me feel stupid for documenting it. And while you may be used to doing things like this in T-SQL, give this a try and I think you’ll find you like it much better once you get used to it.
And I mentioned that it scales to multiple boxes really well, and it does. I just didn’t show you that here cause it’s pretty easy to figure out.

And DO use this with care. It’s so much easier to kill everything on the box in PS than it is with T-SQL. And I’m not taking any responsibility for how you (mis)use this knowledge.

Checking for Active cluster node

Ok, I was on this blog today and it showed a cool Powershell method for checking which cluster node is active.  And while there’s nothing wrong with the script, it does go to show the thing I dislike the most about the way a lot of DBAs work with Powershell in SQL Server.  Now, I can’t stress enough that I’m not picking on this guy, and I’m not saying he’s wrong.  I’m just saying that I prefer to rely on the built-in methods of the provider because it’s just simpler.  Here’s what I mean.

I’m going to borrow his code for a moment just to show you an example.  But I encourage you to visit his blog yourself as he’s got some cool stuff out there.  Anyway, here’s his method for doing this in PS:

# Set cluster name
$cluster_name = "ClusterName";
# Load SMO extension
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null;
$srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $cluster_name;
# Get server properties
$properties = $srv.Properties
$owner_node = $properties.Item("ComputerNamePhysicalNetBIOS").Value;
$is_clustered = $properties.Item("IsClustered").Value
if($is_clustered)
{
	Write-Host "The current active node of $cluster_name is $owner_node.";
}
else
{
	Write-Host "$cluster_name is not a clustered instance of SQL Server.";
}

Overall, there’s nothing wrong with that except it’s just too long.  See, too many DBAs forget what Powershell is all about so they’re constantly re-inventing the wheel.  I like to use the kind of wheels I’m given whenever possible.

Here’s how I would tackle this same task (there are actually 2 ways):

Method 1

>$a = dir

>$a.ComputerNamePhysicalNetBIOS

Method 2

>(dir).ComputerNamePhysicalNetBIOS

And that’s it.  Why would you want to type all that other stuff when you can so easily type just a few characters… 33 to be exact. 

All you have to do is make sure you’re in the server node of the provider tree. 

So if you connect through sqlps, then you’ll right-click on the server itself and that’ll take you to the Default instance.  Then you just need to go one level up by typing ‘cd ..’  From there, just type one of the lines above and you’re golden.

Oh yeah, his script also told you whether the box was even clustered to begin with.  We can handle that with another line.

>$a.IsClustered

That returns ‘True’ so if you want to make it pretty for yourself then just a quick change can make that happen.

>IF ($a.IsClustered) {"It's clustered alright."} ELSE {"No it isn't."}

And strictly speaking you don’t need the ELSE in there.

Now, using both methods you can easily cycle through a ton of boxes so there are no worries there.  
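Here’s a rough sketch of that loop, assuming one default instance per box like the examples above (the server names are made up):

$Servers = "SqlBox1", "SqlBox2", "SqlBox3"
$Servers | %{
$a = dir SQLSERVER:\SQL\$_   ## The server object for this box.
"$_ is active on $($a.ComputerNamePhysicalNetBIOS)"
}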

I’m gonna get on my soapbox for a minute and say that this is the bulk of the PS I see being taught, and to me it shows a lack of fundamental understanding of what PS is supposed to do for us.  I just prefer to use what the provider gives me.  And it really matters too.  If you find yourself somewhere without your scripts, which happens to me all the time when I’m working from a user’s box, or at another shop, then you’ve got to remember all that SMO info.  My method is not only much easier to remember, it’s much easier to investigate, because you can easily do a GM against $a to see all the properties it has.  But if you go even just a couple weeks without typing in that code to load SMO, you can forget nuances and you’ll find yourself looking something up.  And that’s not efficient DBA work.  PS is supposed to make our jobs easier, not complicate them.  PS isn’t just a replacement for VBScript.  It’s an entirely new way of thinking.  And there are tons of guys out there teaching PS who haven’t switched to the PS way of thinking.  To me this is the exact same thing as those guys who use those really complicated cursor methods for finding log space usage when they could just type DBCC sqlperf(logspace).  Can you do it the other way?  Sure, but why would you when the SQL team has given you such an easy method?  This is why, most of the time when I see a PS blog somewhere, I have to translate it into real PS.

So guys, let’s get rid of all that needless SMO code and bring PS back to what it’s supposed to be… simple one-liners that make our job easier.

I also did a video last week about changing server-level properties that talks you through the same methods as well.  Take a look.

http://midnightdba.itbookworm.com/VidPages/PowershellServerProps/PowershellServerProps.aspx