Category Archives: Powershell

Dropping DBs in Powershell

As long as I’m on a roll here with server-level ops, I thought I’d go ahead and cover dropping DBs in PS. This should be a fairly short post as there’s not much to this, but it’s worth talking about anyway.

Dropping DBs isn’t any big deal in T-SQL:

DROP DATABASE SeansDB

And the PS method for doing this is of course a 1-liner, but it’s not as easy as T-SQL. Again there’s more than one method for doing it and I’ll show you both of them. Here, whether you connect through sqlps, or through PS proper, you want to be at the Database node. Here’s a screenshot of doing it from sqlps, but again, it really doesn’t matter.

Method 1

dir | %{$_.Drop()}

There are a couple things to note here. First of all, this is typical PS, so if you run this cmd as it’s written here, you’ll drop every DB on your server… totally uncool. And 2nd, I’m always talking about how if you see an Alter() method on the get-member list then that usually means it wants you to use it before the changes are pushed to the server itself. Well, this is one of those times that makes me put the ‘usually’ in there because while that’s a good rule of thumb, PS is nice enough to drop objects when you ask it to. So anyway, unless you want to lose everything, I wouldn’t run the code above. I just wanted to have a basis for future syntax.

So all we have to do now is limit our result list to the DB we’re interested in:

dir | ?{$_.Name -eq "SeansDB"} | %{$_.Drop()}

It just doesn’t get any easier than that. Now, at this point T-SQL is still ahead, cause even I would still much rather use T-SQL for dropping a single DB. Powershell is going to pull ahead pretty fast when it comes to dropping several DBs, or even a single DB on multiple servers.

Let’s say you’ve got several test DBs, and they all have the word ‘test’ in them somewhere. Since different devs created them, they don’t have a solid naming convention. Hell, some of them may even just have ‘tst’ in them, who knows, right?
At this point it’s just a matter of altering the above script so that it can accommodate your criteria.

dir | ?{$_.Name -match "test" -or $_.Name -match "tst"} | %{$_.Drop()}

T-SQL would require you to code a cursor for this, and while the for-each is technically a cursor in PS, it takes next to no coding for us. PS is starting to pull ahead a little now. And by simply changing the where-object criteria, you can easily change this script to do things that are more difficult in T-SQL like dropping DBs that are all owned by a certain login, or were created on a certain date, or are under or over a certain size, or even haven’t been backed up recently. Of course, some of that criteria you’d never use, but it’s all there for you. And again, you can find out what you can use by doing a get-member and anything that comes up as a property is usable.

dir | gm
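For example, here’s a rough sketch using a couple of those properties (Owner and LastBackupDate are real Database properties, but the login name and the 30-day cutoff are just placeholders I made up):

## Drop every DB owned by a particular login... be really sure before you run this.
dir | ?{$_.Owner -eq "DOMAIN\SomeDev"} | %{$_.Drop()}

## Or drop every DB that hasn't been backed up in the last 30 days.
## Careful: DBs that have never been backed up will match this too.
dir | ?{$_.LastBackupDate -lt (Get-Date).AddDays(-30)} | %{$_.Drop()}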

Method 2
Now let’s look at a shorter method for dropping a single DB.

(dir SeansDB).Drop()

That’s pretty easy, and to do multiple DBs, it could look something like this:

(dir | ?{$_.Name -match "test" -or $_.Name -match "tst"}).Drop()

Now, if you have a DB that exists across multiple boxes then you can drop all of them like this:

$a = "server1", "server2", "server3"
$a | %{
$ServerName = $_; ## Just setting this to make the script easier to read.
cd sqlserver:\sql\$ServerName\default\databases
(dir SeansDB).Drop()}

And that’s it. Going against multiple boxes is so easy. And since I’m setting the $a var to the server list, you can populate $a any way you like and you don’t have to change the rest of the script. So make it the results from a SQL query, or from a txt file, etc. It doesn’t matter.
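For example (the management server, file path, and table below are just placeholders; Invoke-Sqlcmd is there for you in sqlps, or after loading the SQL snap-in in regular PS):

## From a text file with one server name per line:
$a = Get-Content "C:\Scripts\servers.txt"

## Or from a SQL query against whatever inventory table you keep:
$a = Invoke-Sqlcmd -ServerInstance "MgmtServer" -Database "DBA" -Query "SELECT ServerName FROM dbo.ServerList" | %{$_.ServerName}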

OK, so that’s it this time. Be careful with this or you’ll find yourself doing a recovery on all your DBs. Man, it would be really easy to be evil with this huh? In less than 60secs you could kill every DB in your company if you wanted to… say if they fired you for no reason and you wanted to get back at them. I’m not saying you should, I’m just saying it’s possible.
And after something like that I’m compelled to say (again) that I’m not responsible for your (mis)use of any of these methods.

Killing SPIDs in Powershell

Today we’re going to continue our exploration of the server-level in PS.  You’ll remember we explored it a little before when we talked about querying cluster nodes.

So today we’re going to kill users in a DB.  This is incredibly easy to do in PS.

First, let’s take a look at the T-SQL counterpart. Pretty much everyone has a script like this.

DECLARE @currUser varchar(100),
        @SQL nvarchar(200)

DECLARE Users CURSOR
FOR SELECT DISTINCT spid FROM sys.sysprocesses
WHERE [program_name] LIKE '%businessobjects%'

OPEN Users

FETCH NEXT FROM Users INTO @currUser
WHILE (@@fetch_status <> -1)
BEGIN

    SET @SQL = 'KILL ' + @currUser
    EXEC (@SQL)
    --print @SQL

    FETCH NEXT FROM Users INTO @currUser
END

CLOSE Users
DEALLOCATE Users

And of course, the problem is as it always is in T-SQL. It doesn’t scale to several boxes, and if you find yourself w/o your script, it’s not really easy to reproduce off the top of your head unless you’re one of those freaks who can write cursors w/o looking at a map. And I don’t know about you, but I quite often find myself w/o my scripts, so I like things that are easier to reproduce.

And like I said, PS scales very well to multiple boxes. Why would you want to run it against more than one box you ask?
Well, let’s say you’ve got a single app server that hits multiple DB servers and you want to kill them all for a maint window. You can’t assume that stopping the service on that app server will kill the DB connections. In fact, it quite often doesn’t. There are other testing scenarios where this could be useful, but I’ll let all of you use it as you see fit. The point is it’s here if you need it.

So to do this in powershell:
Start by connecting to a DB at the server level (either in sqlps or in PS proper… my examples will be in sqlps, but it really doesn’t matter as long as you connect).

In sqlps you can do this by doing a right-click on the servername and going to ‘Start Powershell’.

Then you just need to go up one level like this:

>cd..

Now you’re ready for the command. 

So let’s say you want to kill all the spids connected to SeansDB.  And as usual, there are 2 ways to do this.  I’m going to show you both of them just for completeness.

Method 1:

dir | ?{$_.Name -eq "SeansDB"} | %{$_.KillAllProcesses("SeansDB")}

Now, those of you who know PS, know this is actually quite wasteful. What you’re doing is getting a list of all the DBs, then filtering it down to one and then running the method.
And of course since you call the method with the DB name to begin with this is actually useless. However, it’s still a legal method so I wanted to mention it.

Method 2:

You’ve got to think about what you’re doing here so you can make the right decision.  When you do this in sys.sysprocesses, you’re working at the server-level, so the only place individual DBs come into play is with the result set of whatever cursor you write to limit your results. And quite often killing all the spids in a DB can be very useful. So here’s a better way to go about it in PS.

(dir).KillAllProcesses("SeansDB")

This is easy to remember, and easy to type. And of course, it’s obvious what it does… it kills all the spids connected to SeansDB. Now, you can also kill just a specific spid like this:

(dir).KillProcess(69)

And that’s how PS can bring that long t-sql cursor down to almost nothing. What? What’s that? You want more? You want to be able to kill spids based off of application, or CPU, or some other criteria? Well, now you’re just being demanding, but I think PS can do something for you. This is just going to take an extra step and here’s what it’ll look like.
In our example here let’s kill spids by application. So we’ll kill all users who are connecting through SSMS.

$server = (dir)
$a = $server.EnumProcesses()
$a | ?{$_.Program -match "SQL Server Management Studio"} | %{$server.KillProcess($_.Spid)}

Now, there’s a treasure trove of stuff in here. Let’s take a look at this in detail, especially the EnumProcesses() method.
This method is overloaded so you’ve got some nice functionality built-in.
Here are the documented overloads:

$a = (dir).EnumProcesses() ## Gets all processes.
$a = (dir).EnumProcesses($False) ## Excludes system processes.
$a = (dir).EnumProcesses(69) ## Get info on a single spid... 69 in this case.
$a = (dir).EnumProcesses("domain\user") ## Gets processes run by a specified login.

And now that you’ve got your list of processes, you can do a get-member on them to see what properties are available to you. So remember above when we killed spids by Program? Well, you can kill by anything returned as a property from your get-member. Here’s a screenshot of the results I got from mine.
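For example, here’s a sketch of killing by CPU instead of Program (Cpu and Spid should be among the columns EnumProcesses() gives you, but verify with gm first, and the threshold here is just a made-up number):

$server = (dir)
$a = $server.EnumProcesses()
$a | gm   ## See which columns you've got to filter on.
$a | ?{$_.Cpu -gt 100000} | %{$server.KillProcess($_.Spid)}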

Killing processes in powershell is so easy it almost makes me feel stupid for documenting it. And while you may be used to doing things like this in T-SQL, give this a try and I think you’ll find you like it much better once you get used to it.
And I mentioned that it scales to multiple boxes really well, and it does. It’s pretty easy to figure out, but here’s a rough sketch anyway.
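It’s the same pattern as the multi-server drop example above (the server names are placeholders, and it assumes a single default instance on each box):

$a = "server1", "server2", "server3"
$a | %{
cd sqlserver:\sql\$_
(dir).KillAllProcesses("SeansDB")}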

And DO use this with care. It’s so much easier to kill everything on the box in PS than it is with T-SQL. And I’m not taking any responsibility for how you (mis)use this knowledge.

Checking for Active cluster node

Ok, I was on this blog today and it showed a cool Powershell method for checking which cluster node is active.  And while there’s nothing wrong with the script, it does go to show the thing I dislike the most about how many DBAs work with Powershell in SQL Server.  Now, I can’t stress enough that I’m not picking on this guy, and I’m not saying he’s wrong.  I’m just saying that I prefer to rely on the built-in methods of the provider because it’s just simpler.  Here’s what I mean.

I’m going to borrow his code for a moment just to show you an example.  But I encourage you to visit his blog yourself as he’s got some cool stuff out there.  Anyway, here’s his method for doing this in PS:

# Set cluster name
$cluster_name = "ClusterName";
# Load SMO extension
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null;
$srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $cluster_name;
# Get server properties
$properties = $srv.Properties
$owner_node = $properties.Item("ComputerNamePhysicalNetBIOS").Value;
$is_clustered = $properties.Item("IsClustered").Value
if($is_clustered)
{
	Write-Host "The current active node of $cluster_name is $owner_node.";
}
else
{
	Write-Host "$cluster_name is not a clustered instance of SQL Server.";
}

Overall, there’s nothing wrong with that except it’s just too long.  See, too many DBAs forget what Powershell is all about so they’re constantly re-inventing the wheel.  I like to use the kind of wheels I’m given whenever possible.

Here’s how I would tackle this same task (there are actually 2 ways):

Method 1

>$a = dir

>$a.ComputerNamePhysicalNetBIOS

Method 2

>(dir).ComputerNamePhysicalNetBIOS

And that’s it.  Why would you want to type all that other stuff when you can so easily type just a few characters… 33 to be exact. 

All you have to do is make sure you’re in the server node of the provider tree. 

So if you connect through sqlps, then you’ll right-click on the server itself and that’ll take you to the Default instance.  Then you just need to go one level up by typing ‘cd ..’  From there, just type one of the lines above and you’re golden.

Oh yeah, his script also told you whether the box was even clustered to begin with.  We can handle that with another line.

>$a.IsClustered

That returns ‘True’ so if you want to make it pretty for yourself then just a quick change can make that happen.

>IF ($a.IsClustered) {"It's clustered alright."} ELSE {"No it isn't."}

And strictly speaking you don’t need the ELSE in there.

Now, using both methods you can easily cycle through a ton of boxes so there are no worries there.  
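Something like this would do it (the server names are placeholders, and it assumes a single default instance per box):

$servers = "server1", "server2", "server3"
$servers | %{
$node = (dir sqlserver:\sql\$_).ComputerNamePhysicalNetBIOS
"$_ is currently active on $node"}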

I’m gonna get on my soapbox for a minute and say that this is the bulk of the PS I see being taught and to me it shows a lack of fundamental understanding of what PS is supposed to do for us.  I just prefer to use what the provider gives me.  And it really matters too.  If you find yourself somewhere without your scripts, which happens to me all the time when I’m working from a user’s box, or at another shop, then you’ve got to remember all that SMO provider info.  My method is not only much easier to remember, it’s much easier to investigate because you can easily do a GM against $a to see all the properties it has.  But if you go even just a couple weeks without typing in that code to load the SMO provider, you can forget nuances and you’ll find yourself looking something up.  And that’s not efficient DBA work.  PS is supposed to make our jobs easier, not complicate them.  PS isn’t just a replacement for VBScript.  It’s an entirely new way of thinking.  And there are tons of guys out there teaching PS who haven’t switched to the PS way of thinking.  To me this is the exact same thing as those guys who use those really complicated cursor methods for finding log space usage when they could just type DBCC sqlperf(logspace).  Can you do it the other way, sure, but why would you when the SQL team has given you such an easy method?  This is why most of the time whenever I see a PS blog somewhere, I typically have to translate it into real PS.

So guys, let’s get rid of all that needless SMO code and bring PS back to what it’s supposed to be… simple one-liners that make our job easier.

I also did a video last week about changing server-level properties that talks you through the same methods as well.  Take a look.

http://midnightdba.itbookworm.com/VidPages/PowershellServerProps/PowershellServerProps.aspx

T-SQL Tuesday #016: Get DB Sums With Powershell

I’m recycling this December 2010 post for T-SQL Tuesday #016.

Here’s a cute little piece of code to add up the sizes of all your DBs on a server. It comes in handy when you’re trying to see how much space you’ll need to duplicate the server either for DR, or reporting, or even just to see how much backup space you’ll need.

Make sure you’re at the DB context of PS:

>SQLSERVER:\sql\ServerName\default\databases

Then type this code:

>dir | %{$Total += $_.size}

>$Total

Ok, that’s it. Not too bad, huh? Now, let’s work on making it look like something you can actually work with.

Ordinarily you would prettify this by just dividing it by the measure you want to convert it to. So if you wanted to convert it to GB, then that would look like this:

>$Total/1GB

However, you can’t do that here. Here’s what you need to know.

This output is in MB, and the “/1GB” trick assumes that the output is in bytes. So let’s assume that your $Total is 4189811.4375. Remember now, that’s in MB. So here’s what the 1GB trick will yield.

>$Total/1GB

0.00390206597512588

That’s not even close to right. And again, that’s because the 1GB trick assumes your output will be in bytes. So here the best way to convert this to GB is to do the math yourself like this:

>$Total/1024

4091.61273193359

Of course, you can still use one of the tricks to get the number you’re looking for, but since you’re already in MB you have to go 1 measure down to make it come out right.

>$Total/1KB

4091.61273193359

So why is that? Well, because 1KB is only a shortcut for 1024, and from MB, the conversion to GB is /1024. And the problem of course with doing it this way is that it’s deceiving as hell. Anyone else looking at this will think you’ve converted to KB because they may not realize that the original output is in MB. So don’t use this one, I only threw it in there to show you how to come up with the right number. It also shows you something about how the trick works.
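If you want to see what those shortcuts really are, just type them at the prompt and PS will show you the raw numbers:

>1KB

1024

>1MB

1048576

>1GB

1073741824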

Now, you can still use the 1GB trick, you just have to massage the data first. So here we’re going to put it back into bytes and then convert it to GB. This is really just to prove to you that the output for this trick really is assumed to be in bytes.

>($Total*1024*1024)/1GB

4091.61273193359

Personally I’d never do it this way. Sure, it shows you and someone else what’s going on so you can read the code, but that’s also what comments are for.

>$Total/1024 # Convert to GB. Can’t use /1GB because this output is in MB and the /1GB assumes the output to be in bytes.

See, what’s wrong with comments?
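And if you’d rather skip the running total variable altogether, Measure-Object will add it up for you (same caveat as above: Size is already in MB, so /1024 gets you to GB):

>$TotalMB = (dir | Measure-Object -Property Size -Sum).Sum

>"{0:N2} GB" -f ($TotalMB / 1024)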

Pinging SQL Server in Powershell

It quite often happens that there will be some kind of issue keeping your app from hitting the DB.  Sometimes the issue is with SQL itself, but most of the time there’s something else involved.  The problem is that when it’s reported through the error stack at the app level, it only says that the DB was unavailable, and it’s hard to get users to understand that it’s just a generic error that means the app couldn’t talk to the DB for whatever reason.  That reason could be NIC, or cable, OS, switch, router, etc.  There are many reasons why an app wouldn’t be able to get to a SQL box.  And it’s made even worse if it’s intermittent.

So a good way to handle this is to put up a ping script that will actually query SQL.  Not check that the server is up, or that the service is running, but that you can actually ping SQL itself and run a query.  I’ve done this for you in powershell.  It runs a simple select and then writes a result to a txt file.  I set up an agent job to run every 5secs and run this code from a dos cmd task.

##C:\SQLPing.ps1

add-pssnapin sqlservercmdletsnapin100

$date = get-date

$a = Invoke-Sqlcmd -ServerInstance Servername -Database master -Query "select @@servername" -ErrorAction SilentlyContinue -ErrorVariable err
if ($err.count -eq 0) {$a = "OK"}
else {$a = "Failed"}

# if (!$a) {$b = "Failed"}
"$date  :  $a" | Out-File c:\VI2conn.txt -append

This kind of script can help in a number of ways, and depending on the level of troubleshooting you want to do, you can place this on different boxes.  So you can place it on the local SQL box to have it test itself, or in another data center to test a specific link, or just in the same subnet or vlan, or put it in the same subnet as the app and then in a couple different ones.  How you spread it around depends on what you’re looking for.  But this can be a tiebreaker because if the apps people insist that SQL is down because of a generic error message, you can tell them that 2 boxes from 2 other subnets were able to ping it every 5secs, but the one from the app’s subnet failed 6 times or whatever.  This way you can also show something solid to the network guys and make it easier for them to find the problem on their end. 
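And if you want to hit several boxes from one spot, a variation like this works too (the server names and output path are placeholders, and it needs the same snap-in as above):

##C:\SQLPingMulti.ps1

add-pssnapin sqlservercmdletsnapin100

$date = get-date
$servers = "SQL1", "SQL2", "SQL3"

$servers | %{
Invoke-Sqlcmd -ServerInstance $_ -Database master -Query "select @@servername" -ErrorAction SilentlyContinue -ErrorVariable err | Out-Null
if ($err.count -eq 0) {$result = "OK"} else {$result = "Failed"}
"$date  :  $_  :  $result" | Out-File c:\SQLPingMulti.txt -append
}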

Now, this won’t tell you if it’s a switch, firewall, NIC, OS, or what, but it will tell you that it’s not a problem with SQL itself or the SQL box.  Of course, if a couple of them fail then it could still be the SQL box cause again, it won’t tell you that either, but it could be a tiebreaker in the right circumstance.

AND, just because I’m a nice guy here’s the job script too.  I did this in SQL2K8.

USE [msdb]
GO

/****** Object: Job [SQLPing] Script Date: 02/04/2011 14:20:44 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

/****** Object: JobCategory [[Uncategorized (Local)]]] Script Date: 02/04/2011 14:20:44 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'SQLPing',
        @enabled=1,
        @notify_level_eventlog=0,
        @notify_level_email=0,
        @notify_level_netsend=0,
        @notify_level_page=0,
        @delete_level=0,
        @description=N'No description available.',
        @category_name=N'[Uncategorized (Local)]',
        @owner_login_name=N'sa',
        @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object: Step [Ping] Script Date: 02/04/2011 14:20:45 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Ping',
        @step_id=1,
        @cmdexec_success_code=0,
        @on_success_action=1,
        @on_success_step_id=0,
        @on_fail_action=2,
        @on_fail_step_id=0,
        @retry_attempts=0,
        @retry_interval=0,
        @os_run_priority=0,
        @subsystem=N'CmdExec',
        @command=N'powershell "c:\SQLPing.ps1"',
        @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Every 5secs',
        @enabled=1,
        @freq_type=4,
        @freq_interval=1,
        @freq_subday_type=2,
        @freq_subday_interval=10,
        @freq_relative_interval=0,
        @freq_recurrence_factor=0,
        @active_start_date=20110204,
        @active_end_date=99991231,
        @active_start_time=0,
        @active_end_time=235959,
        @schedule_uid=N'0b2e4594-92bc-438c-8311-d40076b53042'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

COMMIT TRANSACTION
GOTO EndSave

QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION

EndSave:

GO

FileTable is here

FileTable is another new feature in Denali.  FileTable is the next generation of FileStream.  What this allows you to do is copy files into specified folders on your server, and they automatically get stored in SQL.  So you define a FileTable in SQL and point it at a folder.  Now, anything you put in that folder, shows up in the DB itself.  You can still view the files directly from the file system, only when you do, you’re actually reading them out of the DB.  So we’re really starting to obscure the line between DB and Windows. 

This is one of those features that’s really cool.  To be able to drag a group of files into a folder and have them automatically added to the DB is a big deal.  Not only can you add/delete them, but you can also modify them.  You can set file properties using t-sql and it’ll persist that change to the file system.  So now I have to wonder what the future of powershell will be, because working with a large number of files is something we do with powershell the most.  It’s one of the things powershell does really well.  But now that we’ve got this, if we’re able to define a folder as a DB table, then we don’t have to use powershell for these types of file ops anymore… we can just use set-based t-sql.
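Just to make the idea concrete, getting files into a FileTable from powershell would be nothing more than a plain file copy to the table’s share (the UNC path here is completely made up… yours depends on your FILESTREAM share and FileTable directory names):

## Hypothetical FileTable share -- adjust to your own FILESTREAM/FileTable settings.
Copy-Item C:\Reports\*.pdf -Destination "\\MyServer\MSSQLSERVER\MyDocsDB\Documents"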

The possibilities are churning in my head.