
A Very Heated Argument about Backup Tuning in Minion Backup

A couple weeks ago we here at MinionWare got into a very heated argument that lasted most of the morning and part of the afternoon. The argument was around the backup tuning settings in Minion Backup (MB), and how they should work vs. how they actually work.
The problem came about because Jen was doing some testing for her first MB session at a user group. She came across an issue with the tuning settings when she added the time component to the Minion.BackupTuningThresholds table. She noticed that she wasn’t getting the tuning settings she thought she should get when she was trying to tune for a specific time of day. So naturally she assumed I was stupid and filed it as a bug.

In actuality though it’s doing exactly what it’s supposed to, and it’s following the letter of the Minion Backup law. That law is “Once you’re at a level, you never go back up”. Let me show you what I mean.

Precedence in the Tuning Thresholds table

Take a look at this sample Minion.BackupTuningThresholds table.

[Screenshot: sample Minion.BackupTuningThresholds rows]

Ok, in the above table we’ve got some tuning rows. This is a truncated version of the table, but it’s all we need to demonstrate precedence. We’ve got two rule sets here; one for MinionDefault (the row that provides all the default configuration settings), and one for MinionDev (a specific database on my server).

  • MinionDefault is a global setting that says unless the DB has an override, it’ll take its rows from here.
  • MinionDev is the only DB on this server that has an override, so it’ll take its settings from the MinionDev rows.

At the most basic level, the precedence rule states that once there is an override row for a database, that database will never leave that level…it will never default back to the default row. So in this example, MinionDev is at the database level for its settings, so it will never go back up to the more generic MinionDefault row. Once you’re at a level, you stay at that level.

A “Zero Row” for every level

I’m going to explain how these rows work, and why they are the way they are. Notice that for both levels (that is, for the MinionDefault rows, and for the MinionDev rows), there is what we call a zero row. This is where the ThresholdValue = 0. The zero row is especially important for the MinionDefault row, because this is what covers all DBs; it’s quite possible that you could get a database that’s less than your lowest threshold value.

In the above table, the lowest (nonzero) threshold value for MinionDefault is 20GB. That means that no DBs under 20GB will get any tuning values. Without any tuning values, the number of files would be NULL, and therefore you wouldn’t be able to back up anything…they wouldn’t have any files. So setting the zero row is essential.

And, since each DB stays at that level once it’s got an override, then whenever you put in a DB-level override it’s an excellent idea to give that DB a zero row as well. It may be 50GB now, but if you ever run an archive routine that drops it below your lowest threshold, then your backups will stop if you don’t have that zero row to catch it. Did I explain that well enough? Does it make sense?
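Just to make that concrete, here’s a rough sketch of what adding a DB-level override and its zero row might look like. Fair warning: the column list below (DBName, BackupType, ThresholdValue, NumberOfFiles) is assumed from this discussion, not copied from the actual Minion.BackupTuningThresholds schema, so check your install before running anything like it.

# Hedged sketch only: this column list is assumed from the prose above, not pulled
# from the real Minion.BackupTuningThresholds schema, so adjust it before you use it.
$sql = @"
-- DB-level override: once MinionDev hits 150GB, back it up to 4 files.
INSERT INTO Minion.BackupTuningThresholds (DBName, BackupType, ThresholdValue, NumberOfFiles)
VALUES ('MinionDev', 'Full', 150, 4);

-- And the zero row that catches MinionDev if it ever falls below that threshold.
INSERT INTO Minion.BackupTuningThresholds (DBName, BackupType, ThresholdValue, NumberOfFiles)
VALUES ('MinionDev', 'Full', 0, 1);
"@
Invoke-Sqlcmd -ServerInstance 'MinionDevCon' -Query $sql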

That’s how the rule is applied at a high level between DBs. Let’s now look at how it’s applied within the DB itself.

“Zero Rows” within the database level

As I just stated above, you should really have a zero row for each database that has an override row (you know, where DBName = <yourDBname>).

Let’s look at MinionDev above. It has a BackupType=All set, and a BackupType=Full set. The All set takes care of all backup types that don’t have backup type overrides. So in this case, the All set takes care of Log and Diff backups, because there’s a specific override for Full. Get it? Good, let’s move on.

Notice that MinionDev has a zero row for the All set, and a zero row for the Full set. This is essential because, following the rules of precedence, once it’s at the MinionDev/Full level it doesn’t leave that level. So again, if your database ever falls below your lowest tuning threshold – in this case 150GB – and there’s no zero row to catch it, the backup will fail, because there are no tuning parameters defined below 150GB. This again is why the zero row is so important: it provides settings for all backups that fall below your lowest tuning threshold.

And, if you were to put in a BackupType=Log override for MinionDev, it would also need to have a zero row. I could argue that it’s even more important there because it’s quite possible that your log could be below your tuning threshold.

So now, our Argument

That’s how the precedence actually works in the Minion.BackupTuningThresholds table. The argument started when Jen thought that it should move back up to the All set if a specific BackupType override falls below its tuning threshold. So in other words, in the above table, she wouldn’t require a zero row for the MinionDev-Full set. Instead, if the DB size fell below the 150GB threshold, she would move it back up to the MinionDev-All set, and take the lowest tuning threshold from there.

She said that it wasn’t in the spirit of the precedence rules to make the setting quite that pedantic. So after hours of arguing, drawing on the board, making our case, sketching out different scenarios, etc… we just kinda lost steam and moved on, because she had to get ready for her talk.

The point is though that this is the way it currently works: once it’s at its most specific level, it stays there. So, if you have tuning settings for specific backup types, you’d be really well served to have a zero row for each one just in case.

And I’ll also note that BackupType is the lowest granularity. So, Day and Time (other config options in this table) have nothing to do with this setting. You need to concentrate on the DBName and BackupType. Everything else will fall into place.

Final Caveat: We break the rule (a little)

Now, I know it sounds like a contradiction, but there is just one place where I break this rule. I call it the FailSafe. With the FailSafe, it’s possible to have specific overrides and still get your tuning thresholds from the MinionDefault zero row. Here’s why:

This is a rather nuanced config in Minion Backup, and it’s fairly easy to get something wrong and wind up without a backup. I didn’t want that to happen. So, if you do something like leave your zero row out for an override level, and your DB falls below your lowest threshold setting, you would wind up without any backup because there isn’t a number of files to pass to the statement generator.

The FailSafe says: if you screw up and don’t have a tuning setting available, MB kicks in and grabs the tuning settings from the MinionDefault zero row. At least you’ll have a backup, even if it’s slow.

(That was one of Jen’s arguments: that a FailSafe is a great idea, but she wants it to come from the DB-All set instead of the MinionDefault-All set. I don’t know, maybe she’s right. Maybe that’s more intuitive. I’ll have to think about it. It wouldn’t be that big of a change really. I could walk up the chain. In the above table I could try the MinionDev-All zero row and if that doesn’t exist then I could use the MinionDefault-All zero row. What do you guys think?)

So why not just hardcode a single file into the routine so that when this happens you’re backing up to that single file? The answer is: flexibility. Your MinionDefault zero row may be set to 4 files because all your databases are kinda big and you don’t ever want to back up with fewer files than that. So, set your MinionDefault zero row to something you want your smallest DB to use. If that’s a single file, then ok, but if it’s 4 or 6 files, then also ok. That’s why I didn’t hardcode a value into the FailSafe: it’s all about giving you the power to easily configure the routine to your environment.

Takeaways:

  1. The precedence rules are followed to the very letter of the law.
  2. Once a database is configured at a level, it stays there.
  3. The configuration level is specific to DBName, and then (at the next most specific level) to the DBName and BackupType.
  4. Whenever you have a database-level override row, always have a zero row for it.
  5. Whenever you have a BackupType-level override, always have a zero row for it.
  6. The FailSafe defaults back to the MinionDefault zero row if a level-appropriate setting isn’t available.

Ok, that’s it for this time. I hope this explanation helps you understand the reasoning behind what we did.

Cannot Generate SSPI Context

I woke up pretty early this morning and decided to get something in my dev environment worked out that I’ve been meaning to do for a long time now. I needed to put my main Minion dev box on a domain acct for SQL. It had been running under Local System, so I switched it to use the domain acct SQLSvc. The second I restarted SQL with the new acct I got the dreaded “Cannot Generate SSPI Context”.

Here’s my fix and I’m hoping it’ll help someone in the future.

First let me say that SSPI errors can be caused by a number of things. This one turned out to be an SPN error, but it doesn’t have to be. Among other things it could be:

  1. DNS lookup failures.
  2. SQL Server time out of sync.
  3. Client time out of sync.
  4. Anything else that keeps Kerberos from working.

Now there are also things that can keep the SPN from being created or used. The account might not have the perms it needs to create the SPN in AD, or there could be duplicate SPNs (you can only have one). Or you could simply have the wrong SPN. Make no mistake, this issue has caused many gray hairs for many DBAs, and I personally always sigh when it comes up because it can be fixed in 5mins, or it can take 4 days and 3 MS support techs getting involved. Most of the time though, it’s more straightforward than that. I’m not even going to attempt to make this a treatise on troubleshooting every aspect of this error. I’m just going to show you how to diagnose the SPN portion of it. Quite often that’s the issue, and if it isn’t, you’ll at least have eliminated the SPN and you can concentrate your efforts on something else.

OK, that’s enough prelim stuff, let’s get to it.

First, we need to see what SPNs we have on the box. There are 2 ways you can do that depending on what rights you have in the domain. I’m going to show you both of them so you can have a visual guide as well as a cmdline guide.

To see what SPNs you have on the box, go to the cmdline and type the following:

SETSPN -L MinionDevCon

What I’m saying here is to list (-L) the SPNs registered for the SQL box I’m interested in. In this case, MinionDevCon.

See below for the command and its results.

[Screenshot: SETSPN -L MinionDevCon output showing the two registered SPNs]

Also, notice that the SETSPN command isn’t case sensitive.

So above you can see that I’ve got 2 SPNs for my SQL acct on that box.  Now we just have to delete one of them.  However, before we fix this issue, I’ll show you another way to view the same info.

If you have access to AD, then you can also see the SPNs in the computer object properties.  So go to AD and navigate to the computer object for your SQL box.  Again, in my case this is MinionDevCon.  Then go into the properties of that computer object.  You’ll want to go to the Attribute Editor tab.

[Screenshot: the computer object’s properties in AD, Attribute Editor tab]

Now find servicePrincipalName and either double-click it or highlight it and hit Edit.  This will show you the list of SPNs for that computer object.

Here you can see my 2 SPNs just like at the cmdline.

[Screenshot: the servicePrincipalName attribute listing the two SPNs]

Deleting the extra SPN:

Deleting the bad SPN is a fairly straight forward operation.  And again I’ll show you 2 ways.

At the cmdline, simply use the SETSPN cmd again and pass in the delete flag.

SETSPN -D MSSQLSvc/MinionDevCon.MIdnight.DBA:1433 MinionDevCon

So again, that’s SETSPN -D <SPN> <ComputerName>.
The SPN in the cmd above should be written exactly the way it was represented in the list when you queried it.  Here’s the command in the console so you can see what it looks like:

[Screenshot: the SETSPN -D command failing under my regular account]

Ahhh, see there, my regular acct doesn’t have rights to do this.  I wanted you to see what happens when you don’t have rights.  You will quite often need to get your AD guys to do this for you.  So if you just prepare the cmd and send it to them they’ll be able to make it happen.  Here’s what it looks like when I log in under my domain admin acct.

[Screenshot: the SETSPN -D command succeeding under my domain admin account]

And again, that cmd is simple.  Call SETSPN with the -D flag, which tells it to delete.  Then the exact name of the SPN followed by the name of the server.
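One more thing that isn’t in the walkthrough above: if the reason you’re here is that you just moved SQL from Local System to a domain acct (like I did with SQLSvc), you’ll usually also want the correct SPNs registered under that account. This is just a hedged sketch from an elevated PowerShell prompt; the domain and account names are assumptions based on my setup, so swap in your own.

# Assumption: the service account is MIDNIGHT\SQLSvc and the box is the default instance.
# The -S flag checks for duplicates before adding (safer than -A).
SETSPN -S MSSQLSvc/MinionDevCon.Midnight.DBA MIDNIGHT\SQLSvc
SETSPN -S MSSQLSvc/MinionDevCon.Midnight.DBA:1433 MIDNIGHT\SQLSvc

# Confirm what's now registered for the service account.
SETSPN -L MIDNIGHT\SQLSvc

# And check the whole domain for duplicate SPNs.
SETSPN -X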

Now let’s do the same thing in AD.  So you’re still in your computer object properties like above.  Simply highlight the SPN you want to drop and then click the Remove button.  It’ll put it in the text box above, but just click OK and it’ll be gone.

[Screenshot: removing the SPN from the servicePrincipalName list in AD]

Ok, that’s it.  I didn’t have to restart my SQL service, but I’ve heard of people who have.
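One extra check that’s not part of the steps above, but that’s worth knowing: once the SPN is cleaned up, you can ask SQL whether your connection is actually coming in over Kerberos now instead of falling back to NTLM. A minimal sketch:

# Check the auth scheme of your current connection (KERBEROS, NTLM, or SQL).
$sql = @"
SELECT auth_scheme
FROM   sys.dm_exec_connections
WHERE  session_id = @@SPID;
"@
Invoke-Sqlcmd -ServerInstance 'MinionDevCon' -Query $sql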

Now, this was just a quick tutorial on how to manage SPNs.  This hole can go pretty deep.  Here’s a decent link on MSDN for troubleshooting SPNs.  I don’t think I like their troubleshooting because they don’t really do a good job of showing you the commands, but it’s a good explanation of the problem, what an SPN is, etc.  If I remember correctly it’ll also help you choose the right SPN.

Move DB Files with Regex

Hey guys, I just posted a new vid on how to use Regex to format DB files so you can easily move them.
The situation is this… you’ve got lots of DBs you want to move to a new drive or to a new server, so you need to write the ALTER DATABASE commands to move all the files, and then you need to script the file-move commands at the cmdline or in Powershell. The problem is that to do this effectively, you need some code to parse the filename from the rest of the path.
Here I’m going to show you how to use Regex to parse that out so you can easily build the statements you need. I can write the script to move hundreds of DB files in under a minute. Come watch.
http://midnightdba.itbookworm.com/Video/Watch?VideoId=407

And here’s the demo code so you can follow along.
[Download: MoveDBFiles demo script]
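If you just want the gist without watching, here’s a minimal sketch of the idea. This isn’t the demo script from the video; the paths, DB name, and new location below are all made up.

# A minimal sketch of the regex idea; not the actual demo script from the video.
$newPath = 'M:\SQLData'    # assumption: wherever the files are headed

# Example input; in real life you'd pull these from sys.master_files or SMO.
$files = 'C:\SQLData\MinionDev.mdf', 'C:\SQLLogs\MinionDev_log.ldf'

foreach ($f in $files) {
    # Capture everything after the last backslash as the file name.
    if ($f -match '^(?<dir>.*)\\(?<file>[^\\]+)$') {
        $fileName = $Matches['file']
        $logical  = [IO.Path]::GetFileNameWithoutExtension($fileName)   # assumes logical name = file name

        # The ALTER DATABASE statement to repoint the file...
        "ALTER DATABASE MinionDev MODIFY FILE (NAME = N'$logical', FILENAME = N'$newPath\$fileName');"

        # ...and the matching move command to run once the DB is offline.
        "Move-Item -Path '$f' -Destination '$newPath\$fileName'"
    }
}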

Hey, how about some new Powershell vids?

I’ve been a busy little guy this week.  I’ve posted 4 new videos.

The first one is on cycling the SQL error log from Powershell.  Well, not really: it’s really about deploying the log-cycling solution to other boxes using Powershell.
You can see it here:
http://midnightdba.itbookworm.com/VidPages/PowershellCycleErrorLog/PowershellCycleErrorLog.aspx

The next one is about reading the SQL error log from Powershell.

You can see it here:  http://midnightdba.itbookworm.com/VidPages/PowershellReadErrorLogs/PowershellReadErrorLogs.aspx

The next 2 are a short series on changing DB permissions in Powershell.
You can see them here:

http://midnightdba.itbookworm.com/VidPages/PowershellChangeSQLPermissions/PowershellChangeSQLPermissions.aspx
http://midnightdba.itbookworm.com/VidPages/PowershellChangeSQLPermissions2/PowershellChangeSQLPermissions2.aspx

Changing Job Step Properties in Powershell

I read a blog today from my good friend @SirSQL (Nic Cain) where he was talking about how to change retry attempts on all the jobs on a server. Well, technically it’s the job steps that have the retry attempts, not the jobs themselves. And whenever I see something like this, my fabulously huge MCM brain tries to turn it into Powershell. So I put my fingers to the keys and like 10mins later I had a workable script. And while Nic’s solution is perfectly good T-SQL, it’s another one of those things that highlights the power of the shell, because I only need a few lines of code to do it.

The easiest way to do this is through SSMS. Just start PS from the Jobs node.

And that’ll give you a blank SQLPS window that’s already in the jobs node.

Since we’re in SQLPS we don’t need to load the assembly, but I’ll do it anyway to show you how, because you might not be in SQLPS when you do this. It won’t hurt anything to load it again; that’s one of the things SQLPS does for you: it loads these assemblies.

Here’s how you load the assembly and set a variable to the JobStep object type.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

Now that we have our new $js var (it stands for JobStep), we can fill it with the steps of all the jobs on the server. First, though, let’s take a look at the members of our new var.

$js | gm

Now you’ll get a good listing of all the members so you’ll know what you can and can’t do with it.

Towards the bottom you’ll see the property we’re interested in here: RetryAttempts. Now we just have to go through all the steps and change the retries to what we want. Here I’ll change it to 5 just because it’s a nice round number. You’ll be surprised how easy it is to do this. I’ll go ahead and tack it onto the partial script above and this will become our entire script.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.enumjobstepsbyid()}
$js | %{$_.RetryAttempts = 5;$_.alter()}

Ok, that’s all we need to do to change the RetryAttempts property for all the jobs on the server. But we’re not done talking about this… not even by a longshot.

First, in line 4 notice I call the EnumJobStepsByID() method on each item of the dir. This is how I populate the $js var with all of the job steps. What this line says is list all of the jobs (using dir) and then for each one, get a list of its steps and put it in $js.

Line 5 runs through each of the job steps in $js and actually performs the work of setting RetryAttempts to our new value. And remember, job steps have an Alter() method, and typically whenever something in SMO has an Alter() method it wants you to use it. If you don’t, the changes will take effect in your PS session only and won’t be pushed to the server. So call the Alter() method.

Now, I know what you’re saying… how would you do it for only some of the jobs? Because so far we’ve assumed that you want to apply the changes to every job on the server. Well, there are 2 ways to do that and they’re incredibly easy.

First, you can limit the data from the dir cmd in line 4. It could look like this:

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | ?{$_.Name -match "maint"} | %{$_.enumjobstepsbyid()}
$js | %{$_.RetryAttempts = 5;$_.alter()}

Take note of the new line 4. I’ve just added a where clause to the pipeline so now only jobs with the word “maint” in their names will be in the list.
The 2nd way is just as easy, but you do it at the job step level. If you remember from above, when we looked at the members of $js there was a property called “Parent”. This is the step’s parent job. So all you have to do is add the where clause to the $js pipeline instead and you’ll achieve the same thing.

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.EnumJobStepsById()}
$js | ?{$_.Parent -match "maint"} | %{$_.RetryAttempts = 5;$_.alter()}

Now the new line 5 reflects our easy change.

There are so many uses for this code it’s incredible. There are plenty of properties to change and so many ways to limit the result set. Let’s say you have a maint routine on all of your servers and you want to alter the command that its job steps run. That’s very easy. Instead of changing the RetryAttempts property, just change the Command property like this:

[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo")
$js = new-object microsoft.sqlserver.management.smo.agent.jobstep

$js = dir | %{$_.enumjobstepsbyid()}
$js | ?{$_.Parent -match "maint"} | %{$_.Command = "My new code.";$_.alter()}

There’s nothing here Nic couldn’t change his script to do pretty easily, but this is much shorter and easier to read, I think. It also has one huge advantage… I can run it on multiple servers without much effort. I’m not going to go down that road in this blog because I’ve posted many solutions with that code in it already, so it wouldn’t take much for you to add that to this script.
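For the curious, here’s a rough sketch of what that multi-server flavor might look like using SMO objects directly instead of the SQLPS provider. The server list and the “maint” filter below are just placeholders.

# Rough multi-server sketch; server names and the job filter are placeholders.
[reflection.assembly]::loadwithpartialname("microsoft.sqlserver.smo") | Out-Null

$servers = "Server1", "Server2", "Server3"

foreach ($s in $servers) {
    $srv = New-Object Microsoft.SqlServer.Management.Smo.Server $s
    $srv.JobServer.Jobs |
        ?{ $_.Name -match "maint" } |
        %{ $_.JobSteps } |
        %{ $_.RetryAttempts = 5; $_.Alter() }
}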

So anyway, thanks Nic for letting me use you as the base for this post. Your solution is fine, I just prefer the look and feel of 4 lines of code.

I’ve also got a companion video for this post:

http://midnightdba.itbookworm.com/VidPages/PowershellChangeJobStepProperties/PowershellChangeJobStepProperties.aspx

The answer is No

There are times in a DBA’s life when the answer is No.  And for once I’m not talking about a DBA asking a girl out.  This time I’m talking about requests we get from our users.  I had such a request today.

A manager of another group came up and asked for a generic account that he could use to connect to the DB from an application.  The answer is No.  Sorry dude, but we don’t give out generic accounts unless it’s absolutely necessary.  And of course, then the insistence sets in that he has to have the generic account because it’s an application.  I said no, the app itself is running under an acct, and I’ll be happy to give that acct the perms.  He said, well it’s not that easy, this is a web app.  And I said well, that changes things significantly, and you’re right it’s not that easy.  He said see, I told you.  I said, it’s even easier.

You see, with a web app you can set the app pool to run under any acct you like, so connecting to a DB under certain credentials is wicked easy.  And it really is.  He said well, we also need to be able to connect using SSMS to be able to run a couple SPs manually.  So we were hoping to be able to use that acct for that as well.  Again, No.  Give me the accts you want to have these perms and I’ll make it happen. 
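And for the record, “making it happen” usually boils down to a couple of statements. Here’s a hedged sketch; the domain account, DB, and proc names below are all made up.

# Hedged sketch; the account, DB, and proc names are made up.
$sql = @"
CREATE LOGIN [MIDNIGHT\WebAppPool] FROM WINDOWS;

USE [AppDB];
CREATE USER [MIDNIGHT\WebAppPool] FOR LOGIN [MIDNIGHT\WebAppPool];
GRANT EXECUTE ON OBJECT::dbo.usp_TheProcTheyRun TO [MIDNIGHT\WebAppPool];
"@
Invoke-Sqlcmd -ServerInstance 'MinionDevCon' -Query $sql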

Now that’s the end of the conversation, but here’s a word for you guys reading this.  There are several ways to make this happen depending on how you lay out the group (either in AD or SQL).  The point I’m trying to make here is that sometimes you have to make sure you service your customers in a way that’s best for the company.  They quite often ask for something specific that you can’t give them, but you can give them the equivalent.  Oftentimes users ask for things a certain way because that’s all they know.  They get too caught up in solving a problem, so they try to solve it the only way they know how… and in this case it was asking for a generic account.  But this is where we as DBAs need to step up and have the courage to guide them.  They may get upset, they may even insist that it be done their way, but they’re not DBAs, and their ass isn’t on the line.  It’s our job to make sure that it gets done right.

Now, just for completeness, here’s a video I did quite some time ago that shows you how to run a website under a specific account.  It’s in IIS 6, but it’ll show you how it’s done, and you should be able to transfer that skill to IIS 7 if you have to.  And if the app has a windows service it’s even easier… just run the service under the account you like and you’re golden.

Powershell Homework Answers

Ok, I got some answers from the PS homework, so now it’s time for my solutions. If you didn’t get a chance to look at the assignments, take a look at them here real quick if you like and test yourself:
http://www.midnightdba.com/DBARant/?p=647
As with anything there can be multiple solutions to each problem, and while I may not give every single version of a solution, I’ll cover the major ones as I think of them. At times I’m also going to expand on the solution and show you a complete script you can use in your environment.

1. Get the total size of all the DBs on a server.

Method 1:

DIR | %{ $Total += $_.Size };
$Total

This is a basic method which doesn’t really require much explanation, but if you would like a full discussion of it, you can go here: http://www.midnightdba.com/DBARant/?p=488

Method 2:

DIR | measure -property Size -sum

Again, there’s not much to say about this.

However, as a bonus, here’s a lovely script that will give you some good size info on your DBs, and then give you the sums at the bottom.

param($ServerName = "localhost")
$ServerName

cd SQLSERVER:\SQL\$ServerName\DEFAULT\Databases;

$a = dir;

$Size = @{Label="Size(MB)"; expression={"{0:N2}" -f ($_.Size)}};
$DataSpaceUsage = @{Label="DataSpaceUsage(MB)"; expression={"{0:N2}" -f ($_.DataSpaceUsage/1024)}};
$IndexSpaceUsage = @{Label="IndexSpaceUsage(MB)"; expression={"{0:N2}" -f ($_.IndexSpaceUsage/1024)}};
$TotalSpaceUsage = @{Label="TotalSpaceUsage(MB)"; expression={"{0:N0}" -f (($_.DataSpaceUsage + $_.IndexSpaceUsage)/1024)}};
$SpaceAvailable = @{Label="SpaceAvailable(MB)"; expression={"{0:N2}" -f ($_.SpaceAvailable/1024)}};

$a | FT Name, $Size, $DataSpaceUsage, $IndexSpaceUsage, $TotalSpaceUsage, $SpaceAvailable -auto

#### Now get the total and put it into diff measures.

$Total = ($a | measure -property size -sum)

# $a | %{$Total += $_.size}

$TotalInMB = @{Label="Total(MB)"; expression={"{0:N2}" -f ($_.sum)}};
$TotalInGB = @{Label="Total(GB)"; expression={"{0:N2}" -f ($_.sum/1024)}};
$TotalInTB = @{Label="Total(TB)"; expression={"{0:N2}" -f ($_.sum/1024/1024)}};

$Total | FT $TotalInMB, $TotalInGB, $TotalInTB

CD C:\


2. Cycle through a list of servers and get the service account that all the SQL services are set to start with.

$Servers = "Server1", "Server2", "Server3"

$Servers | %{ 
$curServer = $_ ## Current server in the loop.
gwmi win32_service -ComputerName $_ | ?{$_.DisplayName -match "SQL"} 
} | Format-Table SystemName, DisplayName, StartName -auto

This is one of those that has a few subtle nuance versions, but there’s no need for me to go into each one.
However, here’s my original post on the topic.
http://www.midnightdba.com/DBARant/?p=386

3. Script all the SPs in a DB to a txt file. Make sure they’re separated by GO stmts or the script won’t run.

dir | %{$_.Script() | out-file c:\SP.txt -append; "GO" | out-file c:\SP.txt -append; $_.Name;}

There are a few ways to do this one as well, but again they’re just nuances. And when I say a few ways, I’m really only referring to the proper way to do it in PS, and not the overly complicated SMO versions. While you can definitely script this in SMO, it’s so long I don’t even consider it a solution in comparison to these 1-liners.
As well, I’ve got 2 vids that explain the topic in full. Here’s what you need to know:
http://midnightdba.itbookworm.com/VidPages/PowershellScriptDBObjects/PowershellScriptDBObjects.aspx
http://midnightdba.itbookworm.com/VidPages/PowershellScriptDBObjectsWithOptions/PowershellScriptDBObjectsWithOptions.aspx

4. Change the default file location for a group of servers.

$a = dir
# Set the new default data file path and call Alter() to push the change to the server(s).
$a | %{$_.Settings.DefaultFile = "C:\MyNewFiles"; $_.Settings.Alter();}

Ok, while I haven’t officially blogged on this one, there are 2 similar posts that show you how to get to the level you need to be at in order to run it. So follow those, and you’ll know how to do this one.
http://www.midnightdba.com/DBARant/?p=589
http://www.midnightdba.com/DBARant/?p=601
There’s also my video that shows you how to do this and I think it’s more comprehensive.
http://midnightdba.itbookworm.com/VidPages/PowershellServerProps/PowershellServerProps.aspx

5. Cycle through all the jobs on a server and change all their owners to sa.

dir | %{$_.OwnerLoginName = "sa"; $_.Alter(); $_.Name;}

Now, there’s not a lot of explanation here. That’s because I’ve got a vid that explains it all.
http://midnightdba.itbookworm.com/VidPages/PowershellChangingJobOwners/PowershellChangingJobOwners.aspx
Ok, that’s it for the first PS homework assignments. You’ll notice that I assumed you knew where to run these from. I can’t very well teach every single basic every single time. However, I tried to give you resources to find the answers.
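That said, as a hedged cheat sheet (the server, instance, and DB names are placeholders), the provider paths these one-liners are meant to run from look roughly like this:

# Placeholders only: swap in your own server, instance, and DB names.
cd SQLSERVER:\SQL\MyServer\DEFAULT\Databases                          # for #1 (DB sizes)
cd SQLSERVER:\SQL\MyServer\DEFAULT\Databases\MyDB\StoredProcedures    # for #3 (scripting SPs)
cd SQLSERVER:\SQL\MyServer\DEFAULT\JobServer\Jobs                     # for #5 (job owners)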

Changing user passwords

Most of us have sa and have for a long time, so exactly what it takes for an end user to change his own password isn’t typically a concern for us. And since it came up at work yesterday I decided to make a video about it.
So if you’d like to see what it takes for a user to be able to change his own password, you can see the new vid here:
http://midnightdba.itbookworm.com/VidPages/ChangeUsersPword/ChangeUsersPword.aspx
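And if you just want the short version for a SQL login: a login can change its own password with ALTER LOGIN as long as it supplies the old one, no extra permissions needed. The login name and passwords below are obviously made up.

# Hedged sketch; login name and passwords are made up.
$sql = @"
ALTER LOGIN [WebUser] WITH PASSWORD = 'N3wP@ssw0rd!' OLD_PASSWORD = 'OldP@ssw0rd!';
"@
Invoke-Sqlcmd -ServerInstance 'MinionDevCon' -Query $sql -Username WebUser -Password 'OldP@ssw0rd!'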

Trace flag 1807

This is something I haven’t had a use for in quite some time but it actually came in handy today.  Of course, this isn’t something you’d want to make a living doing, but you’ll love to keep it in your hat and pull it out every couple yrs or so when the situation arises.

This is all about solving problems.  Here’s the scenario.  A vendor is here and he’s trying to upgrade a DB schema.  The way the thing works is it basically copies the entire DB to new tables and then drops the old ones.  So at the apex of the operation the DB is about double in size.  The problem is that the server doesn’t have enough space to hold double the DB.  So what do you do?

Well, one of your options is to add space to the disk.  The problem there is that you can never just add space, right?  You have to provision it, etc.  So in this scenario this isn’t a good option at all.

Next you could find another SQL box on the same build and move the DBs over there.  You could then perform the upgrade on the other server, and then move the files back when you’re done.  Of course, the problem there is finding another box that doesn’t encroach on someone else’s production.  So you may or may not have this type of box lying around.

You could also think about just moving the files over to like a SAN or another server that has enough space and just attaching them across the wire.  So the way that would look would be for you to move the files to the other server (assume it doesn’t have SQL installed because you don’t need it), then attach the DB from the original server and point it back to the UNC path where the files are now.  Then once the conversion is done, all you have to do is move the files back to the original server and you’re golden.

But wait… there’s a problem.  If you attach files across the network, SQL will run a check, see that they’re not local, and throw an error.  And even if you map a drive you’ll get the same result.  So how can we get around SQL checking for network locations?  That’s where trace flag 1807 comes into play.  Just running a simple DBCC TRACEON(1807) before you attach will disable the network check and allow you to attach your files.
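Here’s a rough sketch of what that looks like end to end, once the files have been copied out to the remote share. The share, file names, and DB name below are made up.

# Hedged sketch; the paths and DB name are made up.
$sql = @"
DBCC TRACEON(1807);   -- allow attaching database files from a network path
GO
CREATE DATABASE BigVendorDB ON
    (FILENAME = N'\\FileServer\SQLShare\BigVendorDB.mdf'),
    (FILENAME = N'\\FileServer\SQLShare\BigVendorDB_log.ldf')
FOR ATTACH;
"@
Invoke-Sqlcmd -ServerInstance 'MinionDevCon' -Query $sql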

Pretty cool huh?

And for those of you who like to see things in action, I’ve got this video of how this all works.

http://midnightdba.itbookworm.com/VidPages/Trace1807/Trace1807.aspx