Tag Archives: Query techniques

Truncating your Log Files

I want to show you a great piece of code to truncate and shrink all your log files. The first question you'll ask is why you'd ever want to shrink your log files. It's been drilled into everyone's head that shrinking log files is bad because you can cause too many VLFs, and of course there's the zeroing out that happens when they grow back.
OK, so let's answer that question. There are a few reasons you'd want to shrink all the files down to a small size.
1. Let's say you've got to move some logs, or all of them, to another drive. If left unchecked, logs have occasion to get rather big, so you could honestly have some pretty large (yet mostly empty) log files lying around. And when you've got a downtime window to move these files, you don't need to extend it by copying really large empty files. So it's best to shrink them first so you can copy them over quickly.
2. Also, if you're trying to restore a prod DB to dev or QA and you don't have the same kind of space over there, then you've got to shrink the file. Why try to restore a 90GB log file that's only 1% full? So shrink the log, take the backup, and then it'll restore.
3. And of course if you’ve got several logs on the same drive and one of them gets blown up to an enormous size, it may leave the drive too full for the others to expand. There’s no reason to keep a 50GB or 90GB file mostly empty just because you’re afraid of a VLF problem you probably won’t even have.

So there are 3 reasons you may need to truncate log files. Now here’s the script to actually make the change. It’s simple but it gets the job done.
This is one of those times when a cursor is the perfect solution for the job. I know it's popular to bad-mouth cursors, but a lot of times they're perfect for this type of admin work. On top of the cursor solution, though, there's also a different way to do it. The cursor solution lets you run the code for each DB, add in some error handling, and build in other logic if you like, so it can ultimately grow into a more complete solution. But I'm a big lover of code that writes code, so first I'm going to show you a simpler solution using string building in a query.

SELECT 'USE [' + DB_NAME(database_id) + ']; ' + 'DBCC SHRINKFILE([' + name + '], 1024)'
FROM sys.master_files
WHERE type_desc = 'LOG'
AND database_id > 4
ORDER BY DB_NAME(database_id) ASC

You can see that it’s a simple script that gets the job done. If you’re running it from PowerShell you can run the results in your PS loop with no problems. You could even write the results to a table and use some other automated process to pick up the statements. Or, in cases like this, my favorite is to just select and run them by hand.
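
If you do want to drive it from PowerShell, here's a minimal sketch of that loop (the server name is a placeholder, and this assumes the invoke-sqlcmd cmdlet is loaded):

# Generate the shrink statements, then run each one in turn.
# "MyServer" is hypothetical; point this at your own instance.
$stmts = invoke-sqlcmd -ServerInstance "MyServer" -Query "SELECT 'USE [' + DB_NAME(database_id) + ']; DBCC SHRINKFILE([' + name + '], 1024)' AS cmd FROM sys.master_files WHERE type_desc = 'LOG' AND database_id > 4"
foreach($stmt in $stmts) { invoke-sqlcmd -ServerInstance "MyServer" -Query $stmt.cmd }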

Now for the cursor version. Again, you can put in lots more logic in here and you have more control over the entire process in general. So they’re both worthy methods to have.

 
-----------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
---------------------Truncate and shrink all Log Files-----------------------------------------
-----------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
 
/*
The biggest question I get with this is why you would want to shrink all log files.  There's a question of VLFs and log growths, etc.
Well, a simple use case for this is when you need to move a bunch of log files to a new drive and you don't want to be up all night.
Shrink them down and transfer just a few gigs instead of a few dozen or even into the hundreds of gigs.
 
Another reason is to restore a DB to a dev box or something.  If the drive isn't as big as it is on your main box then you'll need to shrink the log so you can actually restore.
So shrink the log first, then take the backup, and it'll restore.
 
So while it may be good to leave your logs alone for the most part, there are times when it's best to trim them.
Of course, the obvious other reason is space.  If you've got a lot of log files on a single drive then the space needs to be managed a little tighter, and if one of them got
blown out really big for some reason then there's no reason the others have to suffer because you refuse to shrink it.
 
*/
 
 
DECLARE
@curDBName sysname,
@curFileName VARCHAR(2000),
@SQL varchar(4000),
@FileSize VARCHAR(10);
 
SET @FileSize = '1024'; -- The size you want the files to be shrunk to.
 
DECLARE DBName CURSOR FOR
	SELECT DB_NAME(database_id) AS DBName, name AS FileName 
	FROM sys.master_files
	WHERE type_desc = 'LOG'
	AND database_id > 4
	ORDER BY DB_NAME(database_id) ASC
 
OPEN DBName 
FETCH NEXT FROM DBName INTO @curDBName, @curFileName
 
WHILE @@FETCH_STATUS = 0
BEGIN
 
	SET @SQL = 'USE [' + @curDBName + ']; ' 
	SET @SQL = @SQL + 'DBCC SHRINKFILE ([' + @curFileName + '], ' + @FileSize + ')'
 
	PRINT @SQL;
	--EXEC (@SQL);
 
	FETCH NEXT FROM DBName INTO @curDBName, @curFileName
 
END
 
CLOSE DBName
DEALLOCATE DBName
 
GO

Security Theater

The biggest question I get about the Minion products is about security. People constantly tell me that a shop can't implement Minion because it enables xp_cmdshell, whereas the SQL Server Maintenance Solution by Ola Hallengren doesn't, so it's naturally more secure. So in this post I'm going to show you, using both Minion and Ola's routines, that what most people consider "more secure" is really just security theater. Now, since the names of Ola's routines vary slightly, and the entire solution has kind of a long name, I'm going to refer to them collectively as SQLMaint. Likewise, I'm going to refer to the Minion suite of maintenance routines collectively as Minion.

For those of you who don't know, security theater is when you have the appearance of security, but in reality you're not secure at all. Your security measures are just for show. We see this everywhere, and it's probably one of the main reasons we've got so many data breaches happening all over the world. We've got admins making security decisions that they never bother testing, or even questioning, so while their intentions are good, they wind up with just the appearance of security. You'll never be smarter than every hacker on the planet, but don't get in your own way either.

So here I’m going to compare and contrast the methods that both Minion and SQLMaint use to perform their tasks and see which one is honestly more secure in your environment.

Let’s start by looking at how each product does its job.

Minion: Powershell and xp_cmdshell

First let's look at Minion. The Minion tools use a combination of Powershell and xp_cmdshell inside SPs. Now, I've heard screams from some admins that Powershell isn't secure because it could allow someone to do something inside SQL or the OS that they don't have rights to do. This is called privilege escalation, and the claim is outright false. Powershell is a product like anything else. It has to follow security protocols the same as any other program. That means there are no hooks inside Windows that notice when a Powershell script is running and give it extra access. It simply runs under the context of the account that started the script. So Powershell is like any other scripting language in that respect. And if you're accessing SQL Server from Powershell, then you're still limited to your rights inside SQL itself. Powershell has no native way to access SQL, so there's no way for SQL to notice that Powershell is trying to connect and give it extra privileges. You access SQL from Powershell through one of 3 different methods: .net, sqlcmd, or invoke-sqlcmd.

Nobody has accused .net of privilege escalation, so making this call from Powershell wouldn’t do it either as you’re literally creating a .net framework object and connecting to SQL through any of those methods. And nobody thinks that sqlcmd gives you any extra rights either. You just connect to SQL and either use your AD account or your SQL account and you get the perms you’d have anyway. And of course, invoke-sqlcmd is just a Powershell wrapper for sqlcmd so there’s no extra security stuff going on in there either.
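
Just to make that concrete, here's a rough sketch of the .net route from Powershell (the server name is a placeholder). There's nothing special going on; it's a plain client connection, and SQL answers with whatever login you connected as:

# A plain .net SqlClient connection from Powershell; no extra privileges involved.
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=MyServer;Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT SUSER_SNAME()"   # the login SQL sees; same perms you always have
$login = $cmd.ExecuteScalar()
$conn.Close()
$login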

This is a good time to mention that Powershell is just a SQL client like anything else. SQL itself doesn’t even know Powershell exists. So when you connect to SQL through Powershell, as far as SQL is concerned, it might as well be SSMS, or Excel, or a website, or VBScript, or Python, etc. They’re all just clients and SQL doesn’t do anything special for any one of them. So the idea that Powershell leads to unintended privilege escalation is just that… it’s an idea. But it’s completely false. It has to follow the same security rules everything else does. And believe me, I really wish it would give me the rights to do things my personal account can’t.

Now does that mean that someone can’t fool Powershell into running under a different account so that it has more rights? Of course not. Hackers are smart. They’ve got all kinds of ways to get around things. But Powershell isn’t any less secure than VBScript, which comes on your box by default. So if they exploit a security mechanism they can use that exploit against Powershell or VBScript or even just batch file commands.

Second, the Minion tools use xp_cmdshell. Many admins consider this to be a huge security hole. Why? Well, in short, the issue is again privilege escalation. The issue isn't what they're afraid the Minion tools are doing; they're afraid of what someone else could do with xp_cmdshell enabled. Because with it enabled, someone could fairly easily access the OS layer and do all kinds of things under the SQL service account credentials.

SQLMaint: sqlcmd

SQLMaint works by calling sqlcmd from a job step. This is considered more secure because you don't have to enable anything special outside of the default security configuration. However, I'm going to show you that this isn't actually more secure; in fact, it can be considered less secure. So let's get into it and I'll show you what I mean.

xp_cmdshell is limited to sysadmins by default

Ok, we’ll start with xp_cmdshell. Let’s look at how you enable xp_cmdshell and the implications it has.

You implement xp_cmdshell by turning it on in sp_configure. This is an instance-level config so once it’s on, it’s on for all the DBs on that instance. Here’s the code you run to turn it on:

First you have to have ‘Show Advanced Options’ enabled.

sp_configure 'show advanced options', 1

RECONFIGURE

sp_configure 'xp_cmdshell', 1

RECONFIGURE

*Here’s something few people know. You actually only have to specify enough letters to make the option unique. So in the first command above you really only have to do this:

sp_configure 'show', 1

If there were two options that started with ‘show’ you’d have to put enough letters in there to make it unique.

 

Now that it's on, by default it's only available to sysadmins. Nobody else can access xp_cmdshell in any way unless you give them access. And what does this mean in reality? It means that sysadmins have rights to do through xp_cmdshell what they've already got rights to do anyway. They're really not getting any extra rights, since most DBAs have full rights on the box anyway. And since xp_cmdshell runs under the context of the SQL service account, they're running under whatever rights it has. This is one of the big reasons why it's important to follow the least-privilege rule for your service accounts. The other reason is that someone who knows the service account password could log in under that account, do stuff, and have their tracks completely covered. The threat doesn't have to come from outside the company.
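
If you want to see that context for yourself, here's a quick sanity check (whoami is a standard Windows command, and this of course assumes xp_cmdshell is enabled):

-- Returns the Windows account the command ran under: the SQL service account
-- for sysadmins, or the proxy account for non-sysadmins.
EXEC xp_cmdshell 'whoami';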

How to grant access to xp_cmdshell for non-sysadmins

You can give non-sysadmins rights to xp_cmdshell, but it takes an extra step. Since you don’t want just anyone running with full access, you have to define a proxy account. This proxy account provides the security context for all xp_cmdshell executions performed by non-sysadmins. This means that your non-sysadmins don’t automatically have unfettered access to anything on the OS or the network, because you’re going to make sure that the proxy account has only the rights it needs. You’re not going to make the proxy account an admin on any of your boxes. Here’s how you create the proxy:

EXEC sp_xp_cmdshell_proxy_account 'Domain\ProxyLogin', '$$$$###MyStr0ngPassw0rd!@#!@#!!!'

And yes, it has to be an AD account… or at least a local Windows account (I would imagine). And the reason is simple. The only reason for running xp_cmdshell is to access OS-level things. The OS has nothing to do with SQL security so you need to pass it a Windows account. Now you can grant any non-sysadmin execute rights on xp_cmdshell.
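
The grant itself is one statement. As a sketch (the login here is hypothetical, and it has to map to a user in master, since that's where xp_cmdshell lives):

USE master;
-- Lets this specific non-sysadmin run xp_cmdshell under the proxy account's rights.
GRANT EXECUTE ON xp_cmdshell TO [Domain\SomeUser];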

The question is, do you need to give non-sysadmins access to xp_cmdshell? I have to say that in my 20yrs in SQL, I can remember needing this maybe once… possibly twice. The point is that this is a lot like linked servers. The answer is almost always NO, unless there's a compelling reason that can't be gotten around any other way. And that's almost never. So in all but the strictest of circumstances, xp_cmdshell is only going to be available to your admins, who have OS rights to do what they need anyway. Xp_cmdshell just makes it easier for them to code it.

The dangers of sqlcmd

Now let’s look at the method SQLMaint uses to launch its routines. Like I said, SQLMaint calls sqlcmd, which is an OS-level cmdline executable. So you have to have a way to make an OS-level call. And the way this is done is by using the command line job step type. So this job step type allows you to write any cmdline operation you need and it’ll run it for you, you guessed it, under the service account credentials. So by default this functionality is turned on and most people don’t even know it. And if you don’t know it’s there by default then how can you lock it down? The good news though is that only sysadmins have access to this type of job step by default. But anyone who has rights to alter jobs can make changes to the step.

So what does this mean for security vs security theater? Well, with xp_cmdshell you know you’re turning it on and you have to go out of your way to create the proxy and give non-sysadmins rights to it. So at every step of the way you’re aware of what’s going on. But with the job step you’re accepting the default config so anyone with the rights can come in and create a job with this job step and do something they’re not supposed to, or better yet, just alter the current job step in SQLMaint’s call.

Here's a call to SQLMaint's backup routine, taken directly from its website:

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE dbo.DatabaseBackup @Databases = 'USER_DATABASES', @Directory = 'C:\Backup', @BackupType = 'FULL'" -b

The above command backs up the user DBs and this is the code that’s inside his command line job step. Now, what if I wanted to be malicious? I could easily open the job and alter it to do this:

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "CREATE LOGIN [MyNewSALogin] with password = 'StrongPassword'; EXEC master..sp_addsrvrolemember @loginame = N'MyNewSALogin', @rolename = N'sysadmin'; EXECUTE dbo.DatabaseBackup @Databases = 'USER_DATABASES', @Directory = 'C:\Backup', @BackupType = 'FULL'" -b

Ok, so I just gave myself sysadmin on this instance. And I know what you're thinking: you have to have admin rights to be able to make this change. Well, that's not even close to accurate. You don't need admin rights, and you don't even need job-manager perms. In fact, a regular user can make these types of changes with very minimal rights. Let me give you a scenario that's not only possible, but highly likely.

You have a production box that has lots of jobs on it. You have different groups that need to create, alter or manage these jobs because as the DBA team you don’t want to have to get involved every time one of these many jobs has an issue, or every time they need an update… because it happens a lot. So you give that app team rights to manage jobs. Here’s where the fun begins. There are multiple ways this can go wrong, but I’ll go with the simplest. All your user needs is 3 permissions inside msdb, and here they are:

grant select on sysjobs to [SecurityTest]
grant select, update on sysjobsteps to [SecurityTest]

 

I created a SecurityTest user and gave him access to msdb. Then I granted the perms above. Now the only thing the user has to do to recreate what I did above is run a simple update.

update sysjobsteps
set command = 'CREATE LOGIN [MyNewSALogin] with password = ''StrongPassword''; EXEC master..sp_addsrvrolemember @loginame = N''MyNewSALogin'', @rolename = N''sysadmin''; ' + command
where job_id = '0C06625F-F518-4D86-9E5A-063AE8B9C4E4'
and step_name = 'BackupDBs'

 

He can query sysjobs to get the list of jobs and find the one he's interested in, and then update sysjobsteps to make the change. Now, the next time that job runs, he'll have a sysadmin account. He's then free to cover his tracks by removing the changes, and even give himself a couple backdoors in case someone discovers his new account. This can even include adding this type of code inside other SPs or jobs so that if his rights are ever removed, they'll be put back. And depending on how many instances run off of the same SQL service account, he could easily spread his access to every other server in your shop. And he doesn't even have to be that smart to do it. This isn't all that clever of a hack.

But you see what’s happened here, right? You wanted to give him rights to manage his jobs and you ended up giving him sa. And he didn’t need that many rights to do it. See, one of the misunderstandings is that the msdb tables are system tables. They’re not. They’re regular user tables, so you can easily update them directly.
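
You can verify that for yourself with a quick look at the catalog; this is just a sanity check:

-- Run against msdb: sysjobsteps shows up as a plain USER_TABLE, not a system object.
SELECT name, type_desc FROM msdb.sys.objects WHERE name = 'sysjobsteps';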

Lock down sqlcmd!

But how do you protect against this? Well, the same way you did for xp_cmdshell. You create a proxy account and run those command line job steps under an account with much lesser rights. This way, even though someone might change the code the job runs, it’ll fail because the account won’t have the rights to make the change.
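
For reference, the moving parts look something like this. It's only a sketch (the credential, proxy, and account names are all made up), but the procedures are the standard SQL Agent ones:

-- 1. Create a credential for a low-privilege Windows account.
CREATE CREDENTIAL CmdLineCred WITH IDENTITY = 'Domain\LowPrivAccount', SECRET = 'Str0ngPassw0rd!';

-- 2. Create an Agent proxy that uses the credential.
EXEC msdb.dbo.sp_add_proxy @proxy_name = 'CmdLineProxy', @credential_name = 'CmdLineCred', @enabled = 1;

-- 3. Allow the proxy to run CmdExec (command line) job steps.
EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name = 'CmdLineProxy', @subsystem_name = 'CmdExec';

Then you point the command line job step at that proxy instead of letting it default to the service account.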

Security is a complicated animal with many facets, twists, turns, and pitfalls. And to say that one solution is more secure than another just because it uses a feature that's turned off by default simply isn't accurate. I hope I've proven that turning on xp_cmdshell isn't inherently bad, and I really hope you understand now that simply leaving it off doesn't mean you're secure. In fact, I'd say you're at greater risk because you're not likely to have addressed this issue in your jobs. You're merely engaging in security theater. You have to be a sysadmin to turn on xp_cmdshell, and you have to give someone specific rights to run it after creating a proxy. But you could innocently give someone perms in msdb and hand them the keys to the kingdom.

So I hope you start looking at the xp_cmdshell issue differently because it’s a wonderful feature that allows you to do some very cool things. And it lets Minion tools give you some really awesome functionality that you wouldn’t have otherwise.

XP_CmdShell isn’t Evil

I’ve been hearing it more and more the past year.
“XP_cmdshell should always be turned off.”
“Whatever you do, don’t turn on XP_cmdshell!”
“We can’t do that, it requires XP_cmdshell!”
“You’ll fail your audit if XP_cmdshell is turned on.”
And all the other variations.

And I suppose I’ve been hearing it more and more lately because Minion Reindex requires it and Minion Backup will require it even more so.

However, I'll tell you I'm getting pretty tired of hearing it, so, true to my blog, I'm going to rant.
XP_cmdshell has been around forever. And way back in the day, like 15-20yrs ago, it was installed wide open to the public. This is where the problem started. This was back in the day when SQL's GUI allowed way too many people who had no idea what they were doing to create and manage DBs. That ease of use was a huge part of SQL Server taking hold in the industry. However, with the product being that easy to use, a lot of these untrained DBAs had no idea XP_cmdshell was even there, so their instance was completely vulnerable and they didn't even know it. Honestly, this was Microsoft's fault. They should never have packaged up something that dangerous completely open to the public. But you know what, back then they were also installing sa with a NULL password by default. And Oracle had their scott/tiger username/password combo, so MS wasn't the only one doing dumb security back then.

However, now XP_cmdshell comes turned off, and when you enable it, it's not open to public anymore. So seriously, what are you still afraid of? I understand that you used to be scared of it because there was no way to lock it down back then. In fact, Microsoft didn't provide a way to lock down XP_cmdshell until somewhere in the neighborhood of version 4.2. So back when it was open to public I can see how writing a DENY statement would be really taxing to you as a DBA.
But these days you don’t have any excuses. You have to go out of your way to open it up to public. XP_cmdshell is still really useful and I’m personally able to create many excellent solutions using it… things that would be much more difficult otherwise. And do you know what I tell people who tell me how dangerous it is? I ask them why they don’t lock it down.

Think about it… there are many dangerous features in SQL. And they're all kept in check by controlling permissions to them. You don't see anyone screaming that those other features shouldn't be allowed on the box, because everyone just says, we use it, but we keep its usage controlled pretty tightly. So why doesn't that apply to XP_cmdshell? Do you think that SQL all of a sudden forgets how to deny execute perms when that gets called? Do you think that SQL honors all security except that one? Do you think XP_cmdshell is powerful enough to override SQL security and just do what it wants anyway?
Of course not. So what are you afraid of?

The truth is that XP_cmdshell can do a lot, and in the wrong hands it can make a royal mess of things. Then again, so can DELETE and UPDATE. So can SHUTDOWN. So can CLR. So can DROP DATABASE. So can Dynamic SQL. And you don't see anyone saying that all of those should never be allowed on any server for any reason. I would honestly venture to say that Dynamic SQL has been the cause of far more security breaches than XP_cmdshell ever has. I don't have any numbers to back me up, but I bet if you look at the number of security issues caused by XP_cmdshell, they're far outweighed by those caused by other features.

And it's not like people have no way to get that functionality just because XP_cmdshell is disabled. There are still cmdline job steps and cmdline SSIS tasks. And of course, you've got CLR. All of which can be just as dangerous as XP_cmdshell, yet they run on systems all the time. And I know what you're thinking… "But Sean, we control those through permissions so they can't do anything really bad." Yeah, so you're making my point for me. But do you think that if an SSIS guy wanted to do something bad to your box he couldn't find a way if he weren't locked down? Of course he could.

The cool thing about the cmdline task in Agent jobs is that it can be run via proxy. You can set up a proxy user to run that step under so that its Windows perms are limited and it can't run haywire. You wanna hear a secret? There's a built-in proxy mechanism for XP_cmdshell too. I could tell you how to set it up, but DatabaseJournal has already done such a fine job. So here's the link to setting up the cmdshell credential.

I don't want you to just turn on XP_cmdshell on all of your systems for no reason. But I don't want you to completely rule it out as a solution just because you're afraid of it. Tell your Windows admins who are afraid of it to mind their own business and stick to what they know. You're a DBA and it's time for you to take back your SQL instances. Lock them down. Don't be afraid to use cool functionality just because so many people refused to read the documentation 20yrs ago. You know better now. So go out there and do the right thing. Lock down XP_cmdshell, but use it.

Priority vs. Weight

Ok, so this is actually Sean writing this time.
I thought I’d hit the ground running with a great topic that came up just this morning.

The issue is how do you prioritize tasks in code? So let's say you've got a list of DBs to process, but you want to do them in a specific order. You've really got 2 choices when deciding how to do this: Priority and Weight.

Priority:
This one is probably the most common. This is when you have a list of objects and you rank them from, say, 1-10. In this case 1 is the top priority and 10 is the least. This is how a priority system works: items move up the list to the top position.

Weight:
A weight system works just the opposite way. You give more weight to items by giving them a higher number. This extra weight makes them more important so they get processed first. So in the 1-10 list, 10 would be processed first because it has more weight.

So what's the difference between these? Well, I think a priority system is easier when the list is fairly static. You're not going to be making any real changes to the priorities or the objects very often.

A weight system is better when your list and priorities are more dynamic.

Let’s take a look at an example:
You want to process some DBs for something. You pull in the list of DBs and you want them in a specific order. So you put them in #DBs with an Ordering col.
If there won’t be any new DBs in the list for a long time, and you’re not likely to change the order, then a priority system is good for you. However, if you’re going to be adding more DBs, and those DBs will possibly even be higher priority than the current ones, then a weight system is best. The reason is that if you’ve already got the number 1-10 populated, and something new pops up in the list that’s higher priority than anything else, you’d have to set that to 1 and then shift all the others down by 1. So what used to be 1 is now 2, what used to be 2 is now 3, etc. However, with a weight system, you can add as many new items as you like, and you just keep increasing the number. So in this case the new item would be assigned 11 because it’s more important than anything else and has more weight.
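
Here's a minimal sketch of the weighted version (the table and values are made up for illustration):

-- A hypothetical weighted work list.
CREATE TABLE #DBs (DBName sysname, Weight int);
INSERT #DBs VALUES ('Sales', 1), ('HR', 5), ('Billing', 10);

-- A new DB that outranks everything: no renumbering, just a bigger weight.
INSERT #DBs VALUES ('Payroll', 11);

-- Highest weight gets processed first.
SELECT DBName FROM #DBs ORDER BY Weight DESC;

With a priority system you'd ORDER BY the column ASC instead, and that new top item would have forced you to renumber the entire list.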

Of course, if you have to rearrange the weights of any of the lower ones, it does become just a bit more tedious. In this case you'll possibly have to make another sub-group column. This way you can keep your major groups and then just change priorities or weights within the sub-group. So let's say you've got DBs grouped from 1-100 using either method, but we'll say it's by weight for now. If you have to raise one DB's weight by one, you'd have to shift everything else up one.
However, if you have that DB in a major group of say 10 DBs, and they all have weights within that group, then you only have to change the weights within that one group. So whereas before you might have had to change the weights of 30 DBs, now you only have to change under 10.

I tend to use a weighted system more, but I have nothing against a priority system. And I like the two-group (major group plus sub-group) approach even more.

Powershell to Excel

I got this question on twitter recently so I decided to blog it real quick. The question is, how do you output data to Excel from Powershell?

Well, there’s a free function that I’ve used and I’ve been quite happy with it. I’m not going to give a lot of explanation here except to say that when using it on a large dataset I’ve found it to be painfully slow. And by painfully slow I mean several hours. However, using it with a small amount of data isn’t too bad. I like it because it does all the lifting for me. I just output to the function and I’m done. So anyway, here’s the link to the site. The dev does a good job of giving examples.

http://www.lucd.info/2010/05/29/beyond-export-csv-export-xls/

Powershell: Format array elements for SQL queries

Ok, here’s the situation. You need to format an array of values for use in a SQL query. You want the final output to look something like this:

SELECT * from dbo.T1 where col1 IN (1, 15, 17, 23, 55, 67, 88)

The elements in the IN clause are what you got from the array. It doesn’t matter how you got the elements into the array, just that you did. However, for the purposes of this, we’ll just fill the array ourselves.

$a = 1, 15, 17, 23, 55, 67, 88
$a

So now we've got the elements in the array. For me this is typically the result of a query. I'm usually doing something like this:

$a = invoke-sqlcmd -query "select ID from SomeTable where col1 = 'current'"
$a

That’s how I typically fill that array. So anyway, this method works really well as long as the values in the array are numbers. See, in the SQL statement they don’t have to be surrounded by quotes so you can just join them.
Here’s how you join array elements into a single string in PS.

$a = 1, 15, 17, 23, 55, 67, 88
$b = [string]::Join(", ", $a)
$b
 
$Query = "SELECT * from dbo.T1 where col1 IN ($b)"

The Join method takes 2 arguments: the separator you want placed between the elements (in my case a comma followed by a space), and the array to apply it to.

Like I said above, though, this doesn't really help when the elements need to be quoted.
What I need this time is for the SQL query to look like this:

SELECT * from dbo.T1 where col1 IN ('2/2/2013', '2/3/2013', '2/4/2013', '2/5/2013')

The problem is that the elements aren’t quoted when they come in. If you wanted you could add the quotes to the query as you get it back from SQL, but there’s no guarantee you have control over that. And let’s be honest, it’s just fun to do it in PS.

Here’s how to use the same JOIN method in PS to surround the array elements in quotes.

$a = '2/2/2013', '2/3/2013', '2/4/2013', '2/5/2013'
$b = [string]::Join("', '", $a)
$b
 
$Query = "SELECT * from dbo.T1 where col1 IN ('$b')"

Actually, the differences here are subtle. What's changed is that in the Join method, instead of placing just a comma between the elements, I'm placing that comma wrapped in single quotes. Here's the result you'll get right after you do the Join.

2/2/2013', '2/3/2013', '2/4/2013', '2/5/2013

Notice it gave you the quotes everywhere except the outer boundaries. And we took care of that by placing single quotes around it in the $Query line.

Alright, that’s it. You can now surround array elements in quotes if you need to so you can format the array for use in a SQL query.
Good luck.

Offline database size

Ok, so just as an aside: I didn't find out until later that night that Jen and I posted on the exact same topic as a mere coincidence. She thought I had taken her topic and exploded it into PS, but I didn't even know she wrote about it until she said something. So here's hoping we're on separate topics today…
Oh yeah, and you can see both of the other posts here: Mine and Hers.

So on to today's topic. Today I wanna talk about getting the size of offline DBs. I've got a use case right now where we've got dozens of DBs on a server that are offline, and we need to know how much space we'll gain if we delete them. You can't get the space through normal means because they're offline. So the next best thing you can do is get the size from the files themselves. Here you've got 2 choices, but only one is really viable.
First, you can go look at the OS and physically see what the file sizes are. If you've got a lot of them, this isn't workable in the least.

However, if you query sys.master_files you can get the size in 8K pages and then do some math to get a good idea of how much space you'll get back if you were to delete them. Here I've broken it down for you by drive, but you can just take out the drive col and get the total. Now, this has one big limitation: it doesn't account for mount points. You would have to know what the mount points are to do that, and that's another discussion. If you've got straight drives though, you can use this as-is.

select 
LEFT(physical_name, 2) as Drive
, sum(size*8)/1024.0 as TotalInMB 
, sum(size*8)/1024.0/1024.0 as TotalInGB
from sys.master_files mf
inner join sys.databases sd
on sd.database_id = mf.database_id
where sd.state_desc = 'offline'
group by LEFT(physical_name, 2)

Log Management Made Easy

Hey guys… I wrote a nice little log management script I thought some of you might like. It doesn't do anything to your system, so it's safe to run. Well, I am turning xp_cmdshell on and off, so if you don't like that then this script isn't for you.
I'm just putting more info into SQLPERF. These are the most common things I need to know when I have a runaway log. Of course, feel free to modify it any way you like. Unfortunately, due to when PS came into play, you can only run this on SQL2K8 boxes and above. If you want to run it on lower boxes you'll have to take out the PS portion.

One more thing before I give you the script. You’re responsible for anything you run on your box. So don’t blame me if something happens and you mess something up. Like I said, I believe this script to be safe but only if you manage the xp_cmdshell portion properly. If you run the script as it is now it’ll turn off xp_cmdshell and if you’ve got processes that rely on it then they will fail. So just be warned yet again that this is a risk. And the reason I’m turning it off again is because I don’t want to open up anything on your box that shouldn’t be. But I really like having the extra info so I don’t have to go look it up. I hope you do too.

Some important notes about the script:
1. It gives you the physical location of the log files, so if there’s more than 1 you’ll see more than 1 entry for each DB.
2. The ShrinkCmd has a variable at the top used to control its default. I like to shrink down to 1GB a lot of times, but set this to whatever you like.
3. You can expand this for yourself in ways I can’t do for you. For instance you could tie it to your log backup job to see when the last execution was and if it failed.
4. I’ve added the last log backup date for you… you’re welcome.
5. Due to the nature of how PS works, you have to change the instance name at the top to the current instance you're working with. There's nothing I could do about that. I tried to make it as simple as possible.
6. The PS portion relies on xp_cmdshell. However, I turn it on and off for you in the script. If you rely on xp_cmdshell being on, make sure you comment out the portion of the code that disables it, or you could have stuff that breaks because the script just turned xp_cmdshell off.

declare @LogSize int,
		@Instance varchar(100)
SET @LogSize = 1024
 
SET @Instance = 'localhost\default' -- If default instance then put Default ex: 'MyServer\default'
 
SET NOCOUNT ON
 
CREATE TABLE #LogSpace
(
DBName varchar(100),
LogSizeInMB float,
LogSpaceUsedInPCT real,
Status tinyint
)
INSERT #LogSpace
EXEC ('dbcc sqlperf(logspace)')
 
 
CREATE TABLE ##LogSpacePSTemp
(
DBName varchar(100),
LogBackupDate varchar(25)
)
 
exec sp_configure 'show advanced options', 1
reconfigure
 
exec sp_configure 'xp_cmdshell', 1
reconfigure
 
declare @cmd nvarchar(2000)
		SET @cmd = 'sqlps "cd sqlserver:\sql\' + @Instance + '\databases; $a = dir; foreach($DB in $a){$DBName = $DB.Name; $LogBackupDate = $DB.LastLogBackupDate; invoke-sqlcmd -query ""INSERT ##LogSpacePSTemp SELECT ''$DBName'', ''$LogBackupDate''""" -SuppressProviderContextWarning}"'
 
		--PRINT @cmd
		exec xp_cmdshell @cmd, no_output
 
 
 
select 
LS.DBName
, LS.LogSizeInMB
, LS.LogSpaceUsedInPCT
, D.log_reuse_wait_desc as LogReuseWait
, LT.LogBackupDate as LastLogBackup
, DATEDIFF(mi, LT.LogBackupDate, getdate()) as MinsSinceLastLogBackup
, D.recovery_model_desc as RecoveryModel
, MF.physical_name 
, 'USE [' + D.name + ']; DBCC SHRINKFILE([' + MF.name + '], ' + CAST(@LogSize as varchar(10)) + ')' as ShrinkCmd
from #LogSpace LS
INNER JOIN master.sys.databases D
ON D.Name = LS.DBName
INNER JOIN master.sys.master_files MF
ON D.database_id = MF.database_id
INNER JOIN ##LogSpacePSTemp LT
ON LT.DBName = LS.DBName
ORDER BY LS.LogSizeInMB DESC
 
drop table #LogSpace
drop table ##LogSpacePSTemp
 
exec sp_configure 'xp_cmdshell', 0
reconfigure
exec sp_configure 'show advanced options', 0
reconfigure

Finding Triggers in Powershell

I was helping someone out with something so I thought you guys would like to have these 2 short lines of code that can be pretty useful.  Here I’m going to show you how to check your DB for tables that have triggers, and then for tables with no indexes.  Do with them what you will.

First, let's talk about triggers: why to avoid them, and how.

Triggers are a tricky thing to support because they complicate things. They can be nested and you won't even realize it. Their firing order isn't even guaranteed. You can specify the first trigger to fire, but no ordering after that. So you could see intermittent perf problems from both of these aspects of triggers. And the reasons I usually see people putting triggers on tables aren't even valid. They need something to happen in another table, like an audit, or maybe they even need to change a value in the current row. That's the one that drives me crazy the most. If you need to update a value you just inserted, then just change it before you insert it. Inserts should be as fast as possible, and putting an update inside a trigger for a row you just inserted is just stupid. Put that code in an SP and do whatever you need in params so by the time you get to the insert everything is as it should be.

And if you need to do something in another table, like auditing that an action took place, then again put that code in an SP and just do the 2nd table write after the main insert.  This really isn’t hard to figure out.  In my career I think I’ve seen maybe a couple honest uses for triggers and I can’t even remember them right now.  But pretty much every time you need a trigger, you would be better served by putting several statements into a single transaction in an SP.  At least someone could easily see what’s going on and you can tightly control the order of ops.
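
Here's a rough sketch of what I mean; the tables and the SP are made up for illustration:

-- A hypothetical SP that replaces an insert trigger: the audit write is just
-- a second statement in the same transaction, in plain sight.
CREATE PROCEDURE dbo.InsertOrder @CustomerID int, @Amount money
AS
BEGIN
	BEGIN TRANSACTION;
		INSERT dbo.Orders (CustomerID, Amount) VALUES (@CustomerID, @Amount);
		INSERT dbo.OrderAudit (CustomerID, Amount, AuditDate) VALUES (@CustomerID, @Amount, GETDATE());
	COMMIT TRANSACTION;
END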

So to find all the tables in your DB with triggers, just go to the tables node in PS and run the following code.  Once you get it you can do anything with it you like.

dir | ?{$_.Triggers -ne $NULL} | ft schema, name, triggers -auto

Easy, right?

Now, if you do a get-member you’ll see a handful of properties that give you info on triggers.

dir | gm

HasAfterTrigger
HasDeleteTrigger
HasInsertTrigger
HasInsteadOfTrigger
HasUpdateTrigger

And of course you can use those, even for what we're trying to do here, but it makes the code a little too long to be practical. The problem is you have to compare all of those properties.

dir | ?{$_.HasAfterTrigger -eq $True -OR $_.HasDeleteTrigger -eq $True -OR $_.HasInsertTrigger -eq $True -OR $_.HasInsteadOfTrigger -eq $True -OR $_.HasUpdateTrigger -eq $True} | ft name, triggers -auto

Both of these lines of code do the same thing, but one is just less tedious than the other.

And you don’t have to filter it at all.  You’re free to just look at the raw data if you like:

dir | FT schema, name, triggers -auto

Ok, that’s it this time.

Mystery of the missing files

About 3mos ago I had my work box rebuilt.  The encryption software glitched and it just wouldn’t boot.  So the helpdesk girl was able to slave the drive to another box and copy my files to a NAS.  I looked and all the folders were where they should be.  Great.

She then rebuilt the box, and when I got it back I copied all the folders back over. I let it rest, and I was very pleased that I'd been able to preserve all my scripts, etc. A few days passed and I needed a script, so I went to that folder and it was empty. WTx dude? The folders were still out on the NAS, so I tried to get the files off of there, but they were missing there too. OK, now I'm upset. I've just lost most everything I cared about on my box. It took me a while to accept it but I finally did.

Well, today (this is now several wks later) I finally went to replace one of the things I lost by downloading it again. When I went to save it though, it said the file already existed and asked did I want to replace it. See, I saved it to the same folder as before because I'm OCD that way. Needless to say it took me about 2secs to figure out what had happened. Normally I set up my box the way I want right away, but this time I got busy and just forgot. As it turns out my files were there, just hidden. And I didn't have 'show hidden files' checked in folder properties. Now that I knew the problem, how would I fix it? Because there were a lot of files and folders to go through. The answer of course is Powershell. I got it right on the first try, and the code is so small, so compact, and so easy to come up with, it just screams PS. It's the perfect example of what PS is all about. So here's the code I used… enjoy.

dir -recurse -force | %{$_.attributes = 'Archive'}

Now, here’s a brief explanation of a couple of the flags.

-recurse — means to run it against all the subfolders too.
-force — this is a good one because it tells PS to show you all the files.  If they’re hidden you won’t see them unless you use this flag.
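
And if you want to see which files are actually hidden before you flip the attribute, a quick check might look like this (just a sketch):

# List only the items that currently have the Hidden attribute set.
dir -recurse -force | ?{$_.Attributes -match 'Hidden'}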

So that’s it really.  A single line of code to unhide hundreds of files in dozens of folders and subfolders.  And since this is my first day back after vacation, I’d say I’m starting it off on a decent note.