All posts by Sean McCown

I am a Contributing Editor for InfoWorld Magazine, and a frequent contributor to SQLServerCentral.com as well as SSWUG.org. I live with my wife and 3 kids, and have practiced and taught Kenpo for 22yrs now.

Is your SQL Server in a VM?

It seems that one of the key pieces of info you need when looking into performance problems is whether your SQL box is sitting in a VM or not, because that can greatly expand the scope of your troubleshooting. The problem, of course, is that Windows doesn’t advertise that it’s in a VM, so there’s no obvious setting to query inside Windows itself to get this info. Oh, if you’re lucky you can run through the installed software and look for the VM tools, but that’s really hit or miss, and not as easy as I’d like.

I’ll show you the way I do it though and you can either adopt it or not. Frankly, I’ll be really interested in seeing if anyone leaves a comment with a better way, cause I just stumbled on this about a yr ago so I don’t know how it’s usually done.

Anyway, we’re going to turn to Powershell for the answer… there’s a shock, huh?
Here’s the simple code it takes to query for a VM instance:

gwmi Win32_BIOS -ComputerName "Server1"

That’s pretty easy, right?
Now for the results. I’ve only got 2 VM packages to test with so if you’ve got something other than VMWare or Hyper-V then you’ll have to devise your own test.

For VMWare, the results will look like this:
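Something along these lines; the exact values are just representative and will vary box to box:

SMBIOSBIOSVersion : 6.00
Manufacturer      : Phoenix Technologies LTD
Name              : PhoenixBIOS 4.0 Release 6.0
SerialNumber      : VMware-56 4d 2e 01 12 34 56 78-9a bc de f0 11 22 33 44
Version           : INTEL  - 6040000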

Notice the SerialNumber has “VMware” in it? That’s what you’re looking for.
Now, to filter out the boxes in your LAN that are VMs, you have but to filter based on that criteria. It would look something like this:

$a = "Server1", "Server2", "Server3"

$a | %{$bios = gwmi Win32_BIOS -ComputerName $_; If($bios.SerialNumber -match "VMware") {"$_ is Virtual"}}

Ok, so it’s not a tremendously useful example, but it shows you how the simple filter would work.

Now let’s move on to Hyper-V. That output would look like this:
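Again, these values are just representative, but it’ll be something like this:

SMBIOSBIOSVersion : 090004
Manufacturer      : American Megatrends Inc.
Name              : BIOS Date: 05/05/08 20:51:59  Ver: 08.00.10
SerialNumber      : 8992-2677-9156-3463-9803-9930-03
Version           : VRTUAL - 5000805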

Notice that it doesn’t say “Hyper-V”, and it doesn’t even say anything special in the SerialNumber. This time you have to look in the Version col, which has “VRTUAL” in it. It would be nice if it said Hyper-V, but it doesn’t. So now you can apply the same kind of filter as above, only against the Version col, to find your VMs if you’re in a Hyper-V shop.

$a = "Server1", "Server2", "Server3"

$a | %{$bios = gwmi Win32_BIOS -ComputerName $_; If($bios.Version -match "VRTUAL") {"$_ is Virtual"}}

Now, let’s put them together. If you’re in a mixed shop like I am, you’ll want to know what kind of VM your SQL box is in so you know which VM admin you need to work with when you have problems. So this last script will tell you which VM your boxes are in. Now you can query all of your servers and put that info into a table so you have it always. And don’t forget to run it periodically to make sure nothing’s changed. Because we’re always moving boxes into VMs, and even taking a few back out. So keeping up with the latest can be useful. I run mine once a week.

$a = "Server1", "Server2", "Server3"

$a | %{
$bios = gwmi Win32_BIOS -ComputerName $_
if($bios.Version -match "VRTUAL") {"$_ is Hyper-V"}
elseif($bios.SerialNumber -match "VMware") {"$_ is VMWare"}
else {"$_ is Physical"}
}
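And if you want to stash the results in a table like I mentioned, a sketch would look something like this. The server, DB, table, and column names here are all made up, so point it at whatever inventory repository you keep, and it assumes you’ve got the SQL snapin loaded so invoke-sqlcmd is available:

$a | %{
$bios = gwmi Win32_BIOS -ComputerName $_
$type = "Physical"
if($bios.Version -match "VRTUAL") {$type = "Hyper-V"}
if($bios.SerialNumber -match "VMware") {$type = "VMWare"}
invoke-sqlcmd -ServerInstance "DBAServer" -Database "DBAAdmin" -Query "INSERT dbo.ServerVMType (ServerName, VMType, CollectedOn) VALUES ('$_', '$type', GETDATE())"
}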

I would have loved to use the SWITCH statement here, but since I’m having to query 2 separate cols it wouldn’t have been the easiest way to do it. However, just in case you get a chance to use it, here’s a quick lesson on SWITCH. It’s like the CASE statement you see in pretty much every language out there. I won’t bother explaining the details of the syntax because I think it’s fairly self-explanatory. If you do need help though, ping me and I’ll give you a hand, or you could do the responsible thing and bing it on google first.
But here’s a quick example of how you’d use SWITCH.

$a = "hello"

switch ($a)
{
"hello" {"You typed the right word."}
default {"You typed something other than 'hello'."}
}	

Again, it’s not a particularly useful example, but it does show you how SWITCH works. It’s really just a CASE statement.

Change your process

This is an excellent example of how you need to be flexible with your processes, even when you’re in the middle of a project.

We started a project to move a DB to a new set of disks. Since the files are large, we probably weren’t going to be able to fit it into a single downtime so we were going to just move one file at a time over the next few weeks until they were all done. Well, due to circumstances out of our control, now they all have to be done at the same time. The problem is that now the file copies are going to take in excess of 6hrs, which is way longer than any downtime they would give us. I know, right? Don’t worry, I’ve got big problems with them forcing us to do operations in a large chunk like this, and then saying we can’t have time to do it. So we were doing our test last week and it did indeed take about 6hrs to copy all the files. And I don’t know why it didn’t hit me before, but why not change the process? The copy process was there because we were going to piecemeal the operation over several weeks, but since that’s gone, then maybe it’s time to come up with a new strategy.

So that’s what I did. My new strategy is an even simpler backup/restore op. All I have to do when I restore is map the files to their new locations and I’m golden. So far it’s 6hrs to copy, and I know I can back up in 20-25mins. My restore, I’m guessing, will be about 30mins (give or take).
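If you haven’t remapped files during a restore before, here’s roughly what I mean. This is just a sketch with made-up names and paths; for a DB with lots of files you’d build the MOVE list from sys.master_files:

invoke-sqlcmd -ServerInstance "Server1" -QueryTimeout 65535 -Query "RESTORE DATABASE MyDB
FROM DISK = 'E:\Backup\MyDB.bak'
WITH MOVE 'MyDB_Data' TO 'P:\Data\MyDB_Data.mdf',
MOVE 'MyDB_Log' TO 'L:\Logs\MyDB_Log.ldf',
STATS = 10"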

Of course, the backup/restore won’t perform that well on its own. You have to tune it so it’ll use the resources to its advantage. This is where knowing how to tune your backups can come into play. And while I often say that tuning backups is quite often frustrating because you can’t use a lot of your resources because you’ve still gotta leave room for user processing on the box, this is one of those times that you can crank it all the way up. Everyone’s going to be offline for this op, so the box is completely mine. I can use every ounce of RAM and CPU on the server if I want. And that’s what I’m going to do. If you’re interested in how to go about tuning your backups, you can look at my recent SQLSAT session on the topic. I did it for SQL #90 in OKC. You can find the vid page for it here: http://midnightdba.itbookworm.com/Events.aspx
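For a box I own completely during the downtime, the backup itself gets cranked up with something like this. The numbers here are just plausible starting points to test against your own hardware, not magic values, and the compression flag assumes your edition supports it:

invoke-sqlcmd -ServerInstance "Server1" -QueryTimeout 65535 -Query "BACKUP DATABASE MyDB
TO DISK = 'E:\Backup\MyDB.bak'
WITH COMPRESSION, STATS = 10,
buffercount = 75, maxtransfersize = 4194304"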

So anyway, the point is that just because you’ve come up with a way to do something, don’t set it in stone. If the scope changes in such a way that you can now do it a better way, then don’t be afraid to say “Stop, I’ve got a better way”. You may get some pushback from your peers because the project plan is already done and they just want to move forward with what you’ve got, but that’s when you need to push back and say no, this process was developed for a different circumstance and now it’s a completely different scenario. So it’s no longer a viable method.

Good practice saves the day

Ok, a funny thing happened at work today. I had to write a script to move a DB’s files to another drive. Sounds easy enough, huh? Well in theory it was easy, but leave it to me to mess it up. Now, this was just a dry-run so I don’t feel too bad about it, and it gives me a chance to test my process. So, here’s what happened.

My process was this:

1. Run move stmts for all files.
2. Take DB offline.
3. Run PS script that moves the files.

The PS script selects from sys.master_files to get a list of the files to move. Here’s the problem… I’ve already run the move statements so SQL thinks they’re already in the new location. Therefore, the PS script can’t move them from their current location. However, good practice saves the day.

As a habit I always save a copy of sys.master_files to another table in case something happens. Now it’s a simple matter of changing my PS script from selecting from sys.master_files to selecting from sys.master_filesSEAN. Now I’m back in business. That’s the thing I find about good practices; they help you in ways you can’t even predict most of the time.
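The habit itself is nothing fancy; it’s just a quick SELECT INTO you run before you start changing things. Something like this (stash the copy wherever you like; the name is just my convention):

invoke-sqlcmd -ServerInstance "Server1" -Database "master" -Query "SELECT * INTO dbo.master_filesSEAN FROM sys.master_files"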

Here are some other general steps I take to protect against one thing or another:
1. I always try to group like objects in the same schema. It’s just a good idea.
2. I always copy the system DBs when upgrading or patching SQL. Sure I can restore from backup, and I have those too, but if something big happens and I have to completely re-install from scratch it’s much easier to just drop the old system DB files out there than it is to restore from backup.
3. Whenever I’m done working on an important prod system and it’s time to move on to something else, I always close all prod windows and even disconnect from it in object explorer. I don’t want any chance there’s a way to open a new connection to it and do something dumb.

I’d love to give you guys a huge list, and I know there are more, I just can’t think of any right this second.

An interesting log truncation case

Here’s a good one for you guys. I got a call from one of my DBAs today that they’re having trouble with the logs on one server not truncating. They’ve brought the server space to a critical level, and it needs to be fixed. The problem is that the DBA had looked at the log backups and they were fine. The log backup job was running just fine, and there were no active transactions.

I asked if there were any DBs being replicated… no.
OK, I’ll look into it.

So I connect and start my investigation where I always do for log truncation problems.
SELECT name, log_reuse_wait_desc from sys.databases

This tells me exactly what the log is waiting for before it can truncate. It’s really a matter of what your goal is. Some people love the troubleshooting process so much that they want to drag it out as long as they can. So they hunt around looking at different things to try to find the problem. They gather evidence and try different things until the problem is solved. I’m not that way though. I want to find the issue ASAP and get on to other things. So if you really enjoy the hunt then keep doing what you’re doing. But if you want to solve the problem quickly, then use the above query and get on with your life.

So anyway, the log was waiting on a log backup. I then checked the log backup job and it had been running just fine all day. So ok, the log is being backed up, but SQL still thinks that it’s waiting on a log backup before it can truncate the log.

This is where a knowledge of SQL can come in. Because all the DBs on the server are in the same boat with their logs filling up, at this point there are only 2 things that could cause this situation: either the backup isn’t actually running, or it’s running with the copy_only flag.
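Both of those are easy enough to confirm from the backup history in msdb. Something like this (just a sketch, assuming you can query msdb on that server) shows the recent log backups and whether they were taken copy_only, and a gap in backup_finish_date tells you the backups aren’t really running:

invoke-sqlcmd -ServerInstance "Server1" -Database "msdb" -Query "SELECT TOP 20 database_name, backup_finish_date, is_copy_only
FROM dbo.backupset
WHERE type = 'L'
ORDER BY backup_finish_date DESC"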

A quick look at the job properties told me exactly what I needed to know. Looking at the execution times of the different runs, the job was finishing in about 10secs. That seems a little fast to me for a server that has like 200 DBs on it. And sure enough, looking back at the history, 2 days ago the job was taking 9mins.

At this point, I knew exactly what the problem was. I looked in the SP that runs the backup, and the line that actually runs the backup had been commented out. Someone was trying to make a simple change and commented out the wrong line in the SP.

Look, I’m not smarter than your average DBA, and I’m not necessarily better educated. What I do however, is follow a predictable pattern of troubleshooting, and actually pay attention to the evidence. Sometimes the evidence isn’t clear and you have to make guesses, but most of the time it is.
I understand that the thrill of the hunt keeps you going, but some things are easy enough that you should just get them done. Why a log won’t truncate is such an easy thing to diagnose it should just be commonplace for you. Search somewhere else for your troubleshooting fix.

Partitioning and the free lunch

Quite often there’s no benefit to a feature if you’re not using it, but this time that’s not the case. Even if you’re not using partitioning you can still take advantage of it in a really cool way.

Let’s say you’ve got a table that you load every night, and you want to move the current data to a new table and truncate the current one. Here’s how you’d do something like that today.

1. Select * into T2 from T1
2. Insert into T2 select * from T1
3. Rename T1 to T2. Then recreate T1.

But let’s face it, none of those choices are really ideal as they take time. Though of those I would choose 1, 3, 2 in order. I don’t like renaming objects because the new object now has a new ID, and it’s hard to track them over time if they keep changing IDs. So those are your choices if you’re not taking advantage of partitioning.

If you ARE taking advantage of partitioning however, you’ve got a different choice. Now you can switch the data out of the current table and into the 2nd w/o any data actually moving at all. And the best part is that you don’t even have to have partitioned your tables at all. That’s right… your tables are automatically partitioned as a single-partition table so you’re using partitioning whether you know it or not. So that means that in order to truncate T1 and move its data to T2 you only have to do this:

alter table T1
switch to T2

(If T1 were actually partitioned you’d say SWITCH PARTITION n TO T2, but on a regular non-partitioned table you leave the PARTITION clause off.)

Now there are some caveats, but as with most things it’s not too bad. The biggest 3 are:
1. The table structures have to be identical.
2. They have to be on the same filegroup. So the partition you’re switching out and the one you’re switching it into have to be on the same filegroup.
3. The target table/partition has to be empty.

There are some other requirements that are advised, but those 3 are the only show stoppers. And once you’ve got the data into the new table, you can then move it to a new filegroup, or do whatever with it you like.
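If you want to see it for yourself, here’s a little self-contained sketch you can run in a scratch DB. The server, DB, and table names are all made up, and the multi-row VALUES needs SQL 2008 or later:

invoke-sqlcmd -ServerInstance "localhost" -Database "Sandbox" -Query "CREATE TABLE T1 (ID int NOT NULL, Val varchar(50));
CREATE TABLE T2 (ID int NOT NULL, Val varchar(50)); -- identical structure, same filegroup, empty
INSERT T1 VALUES (1, 'a'), (2, 'b');
ALTER TABLE T1 SWITCH TO T2; -- metadata-only: no data physically moves
SELECT (SELECT COUNT(*) FROM T1) AS T1Rows, (SELECT COUNT(*) FROM T2) AS T2Rows;"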

So anyway, it’s a wicked cool method for getting rid of your current data even though you’re not officially partitioning your table.

Learning Powershell in Steps

One of the places PS shines is in its ability to let you work the way you want. So often I find someone stagnating in PS because they can’t figure out the “proper” way to do the task. So they stare at the screen for a long time, try a few things, and finally give up. The problem with that is that all of a sudden PS is this new thing they don’t understand, and they don’t have time to learn its nuances well enough to make it useful. So they get discouraged and don’t even consider it when coming up with solutions to issues. And this is coupled with the fact that sooooooo many of the PS examples out there are by guys who just don’t get it. They post these really long scripts that are written in native SMO when a simple 1-liner will do. Why would you do that to someone? I’ve said many times that if those scripts had been my first intro to PS I wouldn’t have given it a second glance either. To me, these guys can definitely code, but they really don’t *get* what PS is all about.

So here I’m going to help you with what I’m calling the conduit method of learning PS. Because even now, when I can’t figure out how to do something in “proper” PS, I still fall back on what I know. I’ve said many times in my SQLSaturday sessions that PS will do pretty much whatever you tell it, and that it’s perfectly compatible with regular DOS commands. And the same thing goes for T-SQL.

Yesterday I needed to take a user account and add it to every DB on a big server. This server had hundreds of DBs on it. Well, of course I went to PS. My goal was to perform the task as fast as I could and go on about my day, so I chose the conduit method instead of the native PS method. Sure, I can call the regular SMO methods and maybe even figure out how that’s done, and figure out the exact params the methods take, etc, but SMO is so poorly documented it’s hard to ferret some of that stuff out. And you know what? Sometimes it’s more important to just get the job done than it is to make sure it’s done in “proper” or native PS.

So what is this conduit method I’m talking about? Well, it’s marrying what you know in T-SQL and using PS only as a conduit for your T-SQL. This usually consists of using Invoke-SqlCmd inside a loop. This way you get the best of both worlds. You get the power of PS and the simplicity of what you already know, and you don’t have to rewrite it every time. So again, the conduit method lets you code in T-SQL, and just delivers it in PS. And it’s a very powerful technique.

OK, so adding a user account to all my DBs. Here’s what the script looked like:

dir | %{invoke-sqlcmd -ServerInstance localhost -database $_.Name -query "CREATE USER [MyUser] FOR LOGIN [MyUser]"}

So I start at the DB tree in PS. If you run a ‘dir’ command you’ll get a list of DBs. Now you loop through them, and for each one you call invoke-sqlcmd, passing the DB’s name ($_.Name) to -database. Then you run the T-SQL inside the -query param. It’s that easy. Now you’ve got a repeatable method for running code against a whole list of DBs on any server, and it’ll run whatever is in the -query param.

And if you’ve got code that’s more complicated, you can put it in a file and run it that way. It would look like this:

dir | %{invoke-sqlcmd -ServerInstance localhost -database $_.Name -InputFile c:\MySQLFile.sql}

With this you can deploy SPs or views, or tables, or whatever you like to every DB. And you can easily alter it to loop across servers too. So even if you’re a beginner you can use this method because it never changes and you can still get the full power out of PS. And this is the point isn’t it?
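And just to show how little it takes, here’s a sketch of that cross-server version. The server names are made up, and I’m assuming default instances with the SQL provider loaded:

$servers = "Server1", "Server2", "Server3"
$servers | %{
$s = $_
dir SQLSERVER:\SQL\$s\DEFAULT\Databases | %{invoke-sqlcmd -ServerInstance $s -database $_.Name -query "CREATE USER [MyUser] FOR LOGIN [MyUser]"}
}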

So many big PS guys completely miss the point that PS is supposed to make our lives easier. They insist that things have to be done a certain way, and frankly I’m getting tired of all the little pissing contests that take place. Oh, you have to do it this way because it performs better. Or you have to do it this way because it’s more elegant. And while I agree with those to a certain degree, there’s nothing more elegant than getting the job done and not having to spend all day doing it. I’ve had arguments with guys about scripting DB objects and they insist that native SMO is faster than the way I do it, even though mine is a lot less code. And I’ve tested it, and native SMO is faster. On one of my DBs that had several thousand objects, the SMO came in like 15secs sooner than the shorter way. Wow, 15secs, really? What will I do with all that extra time? Maybe I can use it to finally write that book I’ve been planning on, huh? So in most cases these differences are just pissing contests that I have no use for. I don’t care if scripting an entire DB takes a few extra secs. I’ll spend far more time than that writing the SMO it takes to save that time.

And since a lot of our work is adhoc in nature, it helps us to be able to write short, tight code. And by adhoc I mean they’re requests that we couldn’t have foreseen, so we have to write something from scratch. I don’t know about you, but I don’t want my hands in raw SMO 4x a day. I’d rather write a few 1-liners and go to lunch.

So don’t let anyone tell you that the conduit method is worthless or even inferior. It’s just as valid a method as writing 2 pages of SMO to do the same thing, and it performs plenty fast enough. Don’t be afraid of PS. Use the conduit method until you can learn more PS and in the meantime you’ll be more productive than you’ve ever been.

And we’ve got lots of PS vids on MidnightDBA.com, so go check them out and we’ll explain all kinds of things to you.

Powershell Homework Answers

Ok, I got some answers from the PS homework, so now it’s time for my solutions. If you didn’t get a chance to look at the assignments, take a look at them here real quick if you like and test yourself:
http://www.midnightdba.com/DBARant/?p=647
As with anything there can be multiple solutions to each problem, and while I may not give every single version of a solution, I’ll cover the major ones as I think of them. At times I’m also going to expand on the solution and show you a complete script you can use in your environment.

1. Get the total size of all the DBs on a server.

Method 1:

DIR | %{ $Total += $_.Size };
$Total

This is a basic method which doesn’t really require much explanation, but if you would like a full discussion of it, you can go here: http://www.midnightdba.com/DBARant/?p=488

Method 2:

DIR | measure -property Size -sum

Again, there’s not much to say about this.

However, as a bonus, here’s a lovely script that will give you some good size info on your DBs, and then give you the sums at the bottom.

param($ServerName = "localhost")
$ServerName

cd SQLSERVER:\SQL\$ServerName\DEFAULT\Databases;

$a = dir;

$Size = @{Label="Size(MB)"; expression={"{0:N2}" -f ($_.Size)}};
$DataSpaceUsage = @{Label="DataSpaceUsage(MB)"; expression={"{0:N2}" -f ($_.DataSpaceUsage/1024)}};
$IndexSpaceUsage = @{Label="IndexSpaceUsage(MB)"; expression={"{0:N2}" -f ($_.IndexSpaceUsage/1024)}};
$TotalSpaceUsage = @{Label="TotalSpaceUsage(MB)"; expression={"{0:N0}" -f (($_.DataSpaceUsage + $_.IndexSpaceUsage)/1024)}};
$SpaceAvailable = @{Label="SpaceAvailable(MB)"; expression={"{0:N2}" -f ($_.SpaceAvailable/1024)}};

$a | FT Name, $Size, $DataSpaceUsage, $IndexSpaceUsage, $TotalSpaceUsage, $SpaceAvailable -auto

#### Now get the total and put it into diff measures.

$Total = ($a | measure -property size -sum)

# $a | %{$Total += $_.size}

$TotalInMB = @{Label="Total(MB)"; expression={"{0:N2}" -f ($_.sum)}};
$TotalInGB = @{Label="Total(GB)"; expression={"{0:N2}" -f ($_.sum/1024)}};
$TotalInTB = @{Label="Total(TB)"; expression={"{0:N2}" -f ($_.sum/1024/1024)}};

$Total | FT $TotalInMB, $TotalInGB, $TotalInTB

CD C:\


2. Cycle through a list of servers and get the service account that all the SQL services are set to start with.

$Servers = "Server1", "Server2", "Server3"

$Servers | %{ 
gwmi win32_service -ComputerName $_ | ?{$_.DisplayName -match "SQL"} 
} | Format-Table SystemName, DisplayName, StartName -auto

This is one of those that has a few subtle variations, but there’s no need for me to go into each one.
However, here’s my original post on the topic.
http://www.midnightdba.com/DBARant/?p=386

3. Script all the SPs in a DB to a txt file. Make sure they’re separated by GO stmts or the script won’t run.

dir | %{$_.Script() | out-file c:\SP.txt -append; "GO" | out-file c:\SP.txt -append; $_.Name;}

There are a few ways to do this one as well, but again they’re just nuances. And when I say a few ways, I’m really only referring to the proper way to do it in PS, and not the overly complicated SMO versions. While you can definitely script this in SMO, it’s so long I don’t even consider it a solution in comparison to these 1-liners.
As well, I’ve got 2 vids that explain the topic in full. Here’s what you need to know:
http://midnightdba.itbookworm.com/VidPages/PowershellScriptDBObjects/PowershellScriptDBObjects.aspx
http://midnightdba.itbookworm.com/VidPages/PowershellScriptDBObjectsWithOptions/PowershellScriptDBObjectsWithOptions.aspx

4. Change the default file location for a group of servers.

$a = dir
$a.settings.DefaultFile = "C:\MyNewFiles"
$a.settings.Alter()

Ok, while I haven’t officially blogged on this one, there are 2 similar blogs that show you how to get to the level you need to be at in order to run this one. Notice the Alter() at the end; without it the change never actually gets written to the server. So follow those other posts, and you’ll know how to do this one.
http://www.midnightdba.com/DBARant/?p=589
http://www.midnightdba.com/DBARant/?p=601
There’s also my video that shows you how to do this and I think it’s more comprehensive.
http://midnightdba.itbookworm.com/VidPages/PowershellServerProps/PowershellServerProps.aspx

5. Cycle through all the jobs on a server and change all their owners to sa.

dir | %{$_.OwnerLoginName = "sa"; $_.Alter(); $_.Name;}

Now, there’s not a lot of explanation here. That’s because I’ve got a vid that explains it all.
http://midnightdba.itbookworm.com/VidPages/PowershellChangingJobOwners/PowershellChangingJobOwners.aspx
Ok, that’s it for the first PS homework assignments. You’ll notice that I assumed you knew where to run these from. I can’t very well teach every single basic every single time. However, I tried to give you resources to find the answers.

Changing user passwords

Most of us have sa and have for a long time, so exactly what it takes for an end user to change his own password isn’t typically a concern for us. And since it came up at work yesterday I decided to make a video about it.
So if you’d like to see what it takes for a user to be able to change his own password, you can see the new vid here:
http://midnightdba.itbookworm.com/VidPages/ChangeUsersPword/ChangeUsersPword.aspx
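And just to give you the flavor of it, the short version is that a login can always change its own password without any special rights, as long as it supplies the old one. Here’s a sketch with made-up names and passwords, connecting as the user himself:

invoke-sqlcmd -ServerInstance "Server1" -Username "MyUser" -Password "OldP@ssw0rd" -Query "ALTER LOGIN [MyUser] WITH PASSWORD = 'NewP@ssw0rd' OLD_PASSWORD = 'OldP@ssw0rd'"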

Over-tuning Backups

This is going to be a short post, but an important one to demonstrate the importance of thinking about what you’re doing. I recently taught a backup tuning session at SQLSaturday 90 in OKC. You can see the session recording here if you like: http://midnightdba.itbookworm.com/Events.aspx. And I’ve been teaching this session off and on recently so I’m not really sure where this user saw my session, but I had a user write me with a tuning issue and I thought I’d share it with you.

In his email he outlined his restore scenario that he was unable to tune. No matter what he did, the restore didn’t happen any faster, and he would expect to see some improvement. And in fact, his attempts to tune it have only made it take longer. Here’s the code he sent me, and I’ve changed things to protect his environment.

RESTORE DATABASE AdventureWorks
PAGE = '1:200'
FROM DISK = 'c:\AW.bak'
WITH buffercount = 500,
maxtransfersize = 2097152,
norecovery

So there’s nothing wrong with this syntax, and the restore should work just fine. But he’s right, he won’t see any performance gain at all and here’s why. How fast do you think you can restore a single page?
There’s just no way to copy 8KB of data any faster than it already goes. And I’m not surprised at all that his restore takes longer. SQL is still gonna spin up all those resources because you’re telling it to. But since you’re only copying a single page, it’s not going to use them. So spinning up the resources won’t do you any good on this small operation; it’ll only cost you extra time. The resource spin-up will only take a few seconds, but on a page restore like this, that can be in the neighborhood of double the total time. So it could take the restore from say 6-10secs to like 10-20secs or something like that. So it’s really not a big deal time-wise.

If you came in late to the backup tuning debate, the flags we’re arguing over are buffercount and maxtransfersize.
I’ll let you watch the session above for a full explanation of them, but I’ll go ahead and say here that 500 buffers just doesn’t even make sense. And transferring 8KB worth of data in 2MB chunks doesn’t make sense either.
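For contrast, on a full restore of a decent-sized DB those same flags can genuinely pay off. Something like this is the kind of thing worth testing; the numbers are just plausible starting points for your own hardware, not magic values:

invoke-sqlcmd -ServerInstance "Server1" -QueryTimeout 65535 -Query "RESTORE DATABASE AdventureWorks
FROM DISK = 'c:\AW.bak'
WITH norecovery, stats = 10,
buffercount = 75, maxtransfersize = 4194304"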

The point here is this is a definite case where there’s really nothing to even tune. You can only move 8KB so fast. And while I applaud his efforts to apply something he learned in his shop, he needs to put a little more thought into where he applies it. However, he did the right thing. He tested, he saw an issue, and then he emailed me. We’ve discussed it fully and he now understands why he saw what he did. This is how you learn. And even more importantly, he’s out there trying to apply something he learned from a session. He’s not content with leaving things the way they are in his shop, and he’s not content with his current level of skill. You go girlfriend.

And he’s given me permission to blog about this issue so I don’t want any emails about how I screwed over a viewer who came to me for help.

A cleaner way to do dynamic SQL

Here’s something I wrote on SSC a few yrs ago and I thought I’d rewrite it here to make sure it’s fresh on everyone’s mind.
One of the things I hate the most about dynamic SQL is all those single quotes you have to count when building strings. I’m a DBA so we’re not very bright to begin with, and keeping track of the piles of single quotes makes my head spin. And forget about trying to insert something in the middle of all that garbage.
The example we’ll be working with today is a simple query that creates file move statements so you can move a DB with a lot of files to a new drive easily. I’ll throw in another component here in a minute.
So here’s the T-SQL we’ll be working with. It’s not as bad as some I’ve seen, but it’ll get the point across.
SELECT 'alter database CPY modify file (name = [' +
NAME + '] , FILENAME = ''' +
'P:\' + NAME + '\' + right(physical_name, charindex('\', reverse(physical_name)) - 1) + ''')' + 'GO'
FROM sys.database_files

So you see here we’re stacking up these single quotes inside there so we can get actual quotes printed in the resultset. (The right/reverse bit just peels the file name off the end of physical_name, since that col holds the full path.) Anybody who knows me knows I’m a big fan of code that writes code. And everything doesn’t have to be fully automated. There’s really something to be said for scripting the code generation and then making minor changes as needed, or just running it by hand. There are a few things I’d rather not run automatically; I’d rather just paste the code into the editor and run it manually. Something like this is one of those things. So anyway, let’s go about getting rid of all those single quotes that are stacked up inside there.

Declare @SQ char(1)
Set @SQ = char(39)

SELECT 'alter database CPY modify file (name = [' +
NAME + '] , FILENAME = ' + @SQ +
'P:\' + NAME + '\' + right(physical_name, charindex('\', reverse(physical_name)) - 1) + @SQ + ')' + 'GO'
FROM sys.database_files

Ok, explaining this a little. The magic happens in the 1st 2 lines. Set a var as a char(1), then set it equal to char(39). In ASCII, char(39) is a single quote. So now anywhere you had those double and triple quotes, just replace them with @SQ. It’s much easier to see where your SQs are supposed to go, and much easier to add stuff in the middle of it.

Notice also that I put GO stmts at the end. It’s not necessary for this operation, but for some it is. And even when it’s not, there are just some people who really like to see them in there. This is another one of those semi-automated processes I was talking about. If you want your GOs on a separate line, it just takes a little regex in SSMS. Start by pasting the results of the above query into a separate SSMS query window. Here’s what it’ll look like.
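With made-up file names it comes out something like this. Notice the GO glued onto the end of each line:

alter database CPY modify file (name = [CPY_Data] , FILENAME = 'P:\CPY_Data\CPY_Data.mdf')GO
alter database CPY modify file (name = [CPY_Log] , FILENAME = 'P:\CPY_Log\CPY_Log.ldf')GO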

Now we’re just gonna do a little scrubbing in SSMS.

For this we’re going to choose regex (regular expressions) because they’re cool and super useful for these semi-automated functions.
So in your code window in SSMS hit ctrl+H to open the replace window.
Then make sure your settings look like this.
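The settings that matter: under the find options, make sure “Use: Regular expressions” is picked, leave everything else at the defaults, and fill in:

Find what:    GO
Replace with: \nGO

Then hit Replace All.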

Once that finishes, and it’ll be wicked fast, every GO will be dropped onto its own line right under its statement.

Now I’ll explain what’s important. You’re searching for the word GO, and replacing it with \nGO.
\n is the regex code for ‘new line’. There’s nothing more to it than that.

And one more thing for completion. If you wanna see what other ASCII chars there are, I’ve got a little script that’ll show you.

/*
Author: Sean McCown
Date: 06/05/2003
Lists all char codes so you can find the code for the char you're looking for.
Tells you that char(39) is ', etc.  Very nice to have around.
*/

Declare @Chars Table
	(
	Code varchar(10),
	[Char] varchar(4)
	)

Declare @i int
Set @i = 0

While @i < 256
	BEGIN
	Insert @Chars
	Select 'Char(' + cast(@i as varchar(4)) + ')', char(@i)
	Set @i = @i + 1
	END

Select * from @Chars

OK guys… that’s my little corner of the world today. It’s not widely useful but it does come in handy from time to time.