Category Archives: Admin

SQL Saturday Oklahoma City, now with a Sean and Jen precon!

We’ll be at OKC next month for their SQL Saturday on August 23, and we’re presenting our precon “Becoming an Enterprise DBA” on August 22! Get your early bird ticket by July 15 and save over 15%!

Precon: Becoming an Enterprise DBA

August 22, 2014, Norman Technology Center, Moore, OK

There’s a real and quantifiable difference between a regular DBA and the leader of an enterprise.  Guiding the database effort of a huge enterprise takes business acumen, experience, problem solving, street smarts, and above all… vision.  Quite often, regular DBAs are thrown into big enterprises without knowing what it takes to get the job done.  In fact, most DBAs get thrown into this situation without even a hint that there’s a difference.  And while they do the best they can, they tend to do more harm than good, which is why many companies have such a hard time managing their data.  What you need is to change your tactics when dealing with large numbers of servers.

The biggest piece of missing training in databases is enterprise training.  DBAs are often taught the syntax and usage of features, but they’re never taught how to actually use them: how to plan an enterprise, run an enterprise, cope with changing requirements, manage difficult users or groups, manage growth, manage vendors, etc.  There are so many more requirements to running an enterprise than just knowing how the individual features work.  There are disk and backup requirements, auditing requests, end-user reporting, paranoid VPs, and much more.  And there’s no easy way to gain this type of business sense on your own.  You have to learn from seasoned pros who have been there many times over.

I’ve built successful DBA teams in several companies, even ones that were resistant to the change.  And I can show you how to do it too.  Take your daily tasks down to almost nothing while you increase your visibility in your company (in a positive way).

What you will learn:

  1. How to properly manage a large environment.
  2. How to manage disk space and growth requirements across hundreds of servers.
  3. How to manage end user expectations.
  4. How to build a self-service reporting model so DBAs don’t have to field everyday requests.
  5. How to reduce security management issues by up to 90%.
  6. How to assess your biggest DBA resource hogs to make the most impact in your environment.
  7. Strategies for dealing with vendors and vendor security issues.
  8. How to hire a competent DBA.
  9. How to staff your company appropriately.  Do you know how many DBAs you need?
  10. How to get a bigger raise and make your company pay for training.

And much much more.  If you manage an enterprise, or would like to, then you can’t afford to miss this seminar.  No other training exists to take you from being a regular DBA to running and leading an enterprise database effort.  And don’t fool yourself; there is a real difference.  Get ready to take your skills and your career to the next level.

 

Powershell to Excel

I got this question on Twitter recently, so I decided to blog it real quick. The question is: how do you output data to Excel from Powershell?

Well, there’s a free function that I’ve used and I’ve been quite happy with it. I’m not going to give a lot of explanation here except to say that when using it on a large dataset I’ve found it to be painfully slow. And by painfully slow I mean several hours. However, using it with a small amount of data isn’t too bad. I like it because it does all the lifting for me. I just output to the function and I’m done. So anyway, here’s the link to the site. The dev does a good job of giving examples.

http://www.lucd.info/2010/05/29/beyond-export-csv-export-xls/
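If that function is too slow for your dataset, a quicker (if less pretty) workaround is plain CSV, which Excel opens natively. Here’s a minimal sketch; the sample data and output path are just placeholders, not anything from the function above:

```powershell
# Quick alternative for large datasets: Export-Csv is fast, and Excel
# opens .csv files natively. Sample data and path are placeholders.
$data = Get-Process | Select-Object Name, Id, WorkingSet
$path = Join-Path ([System.IO.Path]::GetTempPath()) 'processes.csv'
$data | Export-Csv -Path $path -NoTypeInformation
```

You lose the formatting you’d get from a real .xls, but for a few hundred thousand rows it finishes in seconds instead of hours.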

What is a rolling outage?

I often have to explain to my clients about rolling outages. See, this comes into play when you’re doing maintenance, mainly reindexing.
On SQL Standard you can’t reindex online, which means you’ll have table locks that are going to block your users. So here’s how this works… Table1 begins its reindexing and the table is blocked. Nobody can access the table at all. The app complains and so do the users. You even start getting complaints that the DB may be down. Then the reindexing finishes for that table and the locks are released and the users can get back in.
Then Table2 starts its reindexing, and the same thing happens. So Table1 was effectively offline, then Table2, then Table3, etc.
You’ve got a rolling outage. Get it?
This is what I call it. I’ve never heard anyone else use it, but when I say it to DBAs they have no problem working out what I mean. It’s a good term.

This is, in fact, one of the questions I ask when determining whether we should be on Standard or Enterprise.  It’s not a normal downtime, since the entire DB isn’t offline.  Oftentimes the business can sustain a rolling outage at the right time of day.  But it really depends on how large the tables and indexes are.  Some tables will have outages of a few seconds, while others can take an hour or longer.

What’s the CPU usage on the server? Use #Powershell!!

We have here an easy, handy-dandy way to find out how much CPU is in use on your server. Of course you can find out by opening up task manager, or by running perfmon, but that’s not really the point. We want something quick and easy. We want something we could potentially use for monitoring, alerting, or tracking over time in SQL Server.

Well of course the answer is Powershell!

Open up Powershell right now, and type "Get-WmiObject win32_processor | select LoadPercentage". That’s how much CPU is in use right now.

Now let’s say that you have a recurring intermittent CPU spike on your SQL Server. How could we use this? Let’s take a look at our script:


CREATE TABLE #PS
(
PSoutput NVARCHAR(2000)
);

DECLARE @posh NVARCHAR(2000);
SET @posh = 'powershell "Get-WmiObject win32_processor | select LoadPercentage"';

INSERT  INTO #PS
( PSoutput )
EXEC xp_cmdshell @posh;

SELECT  CAST(LTRIM(RTRIM(PSoutput)) AS INT) AS LoadPercentage
FROM    #PS
WHERE   PSoutput LIKE '%[0-9]%';

DROP TABLE #PS;

Here’s what we’re doing:

  1. Create a temp table #PS to hold our output.
  2. Save the Powershell command as a variable.
  3. Run the Powershell command using xp_cmdshell (this will have to be enabled using sp_configure), and load the results into #PS.
  4. Select the numeric output rows from #PS, trim the spaces off, and cast them as integers.  (Note: on a multi-socket box, win32_processor returns one row per physical processor, so you may get more than one value back.)

From here, we could do several things.

  • If we wanted to track CPU over time, we could stick this in a job to save that value to a table, and have it run every few minutes.
  • Or, we could set up an alert if the CPU goes over a percentage.
  • Better yet, a hybrid solution would be to track just the instances of CPU over a threshold (say, 90%), and alert if it happens several times.

We have a solution in place (on a server that spikes CPU randomly) that runs sp_WhoIsActive to a table if CPU is above threshold, and alerts if the CPU has been high for 3 samples out of the last 5. It keeps us from running pell-mell to the computer every time an alert happens, because we set it up to gather information automatically.
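That 3-out-of-5 rule is simple enough to sketch in Powershell. Here’s a minimal version; the function name and defaults are mine, and in practice the samples would come from the query above rather than being passed in by hand:

```powershell
# Returns $true when at least $MinHits of the given CPU samples exceed
# $Threshold -- the "high for 3 samples out of the last 5" rule.
function Test-CpuAlert {
    param(
        [int[]] $Samples,      # most recent LoadPercentage readings
        [int]   $Threshold = 90,
        [int]   $MinHits   = 3
    )
    $hits = @($Samples | Where-Object { $_ -gt $Threshold }).Count
    return ($hits -ge $MinHits)
}

Test-CpuAlert -Samples 95, 20, 97, 92, 40   # 3 of 5 over 90 -> $true
```

Keeping the decision logic separate from the sampling makes it easy to tune the threshold and window without touching the collection job.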

Of course it would also be useful to see how much CPU that SQL Server itself is using, to compare against the server total. But that’s a blog for another day.

 

Sean McCown
www.MidnightDBA.com/DBARant

Jen McCown
www.MidnightDBA.com/Jen

 

This is a MidnightDBA collaborative blog.

Another connection is already running ‘sp_replcmds’ for Change Data Capture in the current database

I recently got this error message in a CDC scenario. There are 2 places you can discover this. You can look at the job history for your CDC capture job, and you can look in sys.dm_cdc_errors.

Since CDC uses the replication log reader, replication rules apply here: you can only have one process running sp_replcmds at a time.  And in the case of CDC and repl, that’ll be the log reader.

That said, how did this happen to begin with?  Well, in my case, I think what happened was that another DBA noticed the log had gotten up to 6GB and never came down.  Log backups were running every hour, so over the course of a couple weeks they should have cleared out all the transactions, but for some reason they didn’t; 6GB of the log remained active.  So what I think happened is the other DBA stopped the CDC capture job and ran sp_repldone to clear things out.  And this is where the mistake came about: he didn’t run sp_replflush afterwards, and he didn’t disconnect his session.  I suspected this might be the case, so all I did was disconnect his session and restart the CDC capture job, and it started up again right away.

Now, to my knowledge there wasn’t anything I could have done to know which connection it was, because he had run a few other things since then and his input buffer was misleading.  The only thing that let me know which spid to kill was that I knew who probably did something like that.

Anyway, I don’t know how to tell you to find the offending user in a case like this where it’s just left over from a previous query in the same connection. But at least you know what you need to do now so you can make your own guesses as to who it is. If anyone has a better way to find this info I’d love to hear it.

SQL Server 2012 Install Gotcha

For those of you who like scripting your installs as much as I do there’s something you should be aware of when installing 2012 on Windows 2012 R2. I don’t remember coming across it in Windows 2012 RTM, but maybe I’m wrong. All the same though…

When you install SQL 2012 on Windows 2012 R2, you’ll get an error partway through saying that you have to install NetFx3.
Normally you’d do this in PS as part of your script like this:

Add-WindowsFeature net-framework-features 

Unfortunately you’ll get an error when you try to install this by default on a fresh Windows install. Look at the pic below…

[Screenshot: Get-WindowsFeature output showing net-framework-core as ‘Removed’]

You can see that while the net-framework-features are present, net-framework-core is ‘Removed’.  This is actually what you’re looking for.  So where do you get it?  On the Windows DVD, of course.  So now your PS code to install this feature looks like this (I’m assuming your Windows DVD is in the D: drive):

Add-WindowsFeature net-framework-features -source D:\Sources\sxs

Now that leaves us with a problem.  How do we get the Windows guys to install this for us by default when they give us our box?  Because I’ve had very limited success getting them to change their image in a lot of places, and they don’t want to go in afterward and install anything manually.  So here’s what you do.
Get a hold of the Windows media just once.  You can pull it from your MSDN account, or from the network share where they keep the Windows install, or just ask them for it.  When you get it, copy that entire folder to your SQL install folder.  It’s only 275MB.  Now you have a copy of it for yourself and you don’t have to bother them about it.  And being able to pass in the -source param means you can point it at any directory you like.
So now the top of your PS install script will look like this:

Add-WindowsFeature as-net-framework
Add-WindowsFeature net-framework-features -source D:\Sources\sxs

Now, if you’re doing your install through the wizard, you will still have to do this step.  I think I’ve heard that setup will complete just fine if you don’t install this feature, but I haven’t confirmed that personally yet.  I also suspect that if it completes fine manually without it, it may go ahead and complete when scripted too, but it may shoot up an error in the log.  And there’s just no reason to create errors when you don’t have to.

I’ll try to find time to test a couple different scenarios and let you know how it goes… you know, in all my copious free time.

Powershell: Format array elements for SQL queries

Ok, here’s the situation. You need to format an array of values for use in a SQL query. You want the final output to look something like this:

SELECT * from dbo.T1 where col1 IN (1, 15, 17, 23, 55, 67, 88)

The elements in the IN clause are what you got from the array. It doesn’t matter how you got the elements into the array, just that you did. However, for the purposes of this, we’ll just fill the array ourselves.

$a = 1, 15, 17, 23, 55, 67, 88
$a

So now we’ve got the elements in the array.  For me this is typically the result of a query.  I’m usually doing something like this:

$a = invoke-sqlcmd -query "select ID from SomeTable where col1 = 'current'"
$a

That’s how I typically fill that array.  Anyway, this method works really well as long as the values in the array are numbers.  See, in the SQL statement they don’t have to be surrounded by quotes, so you can just join them.
Here’s how you join array elements into a single string in PS.

$a = 1, 15, 17, 23, 55, 67, 88
$b = [string]::Join(", ", $a)
$b

$Query = "SELECT * from dbo.T1 where col1 IN ($b)"

The Join method takes two arguments: the separator you want placed between each pair of elements (in my case a comma followed by a space), and the array to apply it to.

Like I said above though this doesn’t really help when the elements need to be quoted.
What I need this time is for the SQL query to look like this:

SELECT * from dbo.T1 where col1 IN ('2/2/2013', '2/3/2013', '2/4/2013', '2/5/2013')

The problem is that the elements aren’t quoted when they come in. If you wanted you could add the quotes to the query as you get it back from SQL, but there’s no guarantee you have control over that. And let’s be honest, it’s just fun to do it in PS.

Here’s how to use the same JOIN method in PS to surround the array elements in quotes.

$a = '2/2/2013', '2/3/2013', '2/4/2013', '2/5/2013'
$b = [string]::Join("', '", $a)
$b

$Query = "SELECT * from dbo.T1 where col1 IN ('$b')"

Actually, the differences here are subtle.  What’s changed is in the Join method: instead of placing just a comma and a space between the elements, I’m placing that separator inside single quotes.  Here’s the result you’ll get right after you do the Join.

2/2/2013', '2/3/2013', '2/4/2013', '2/5/2013

Notice it gave you the quotes everywhere except the outer boundaries. And we took care of that by placing single quotes around it in the $Query line.

Alright, that’s it. You can now surround array elements in quotes if you need to so you can format the array for use in a SQL query.
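If you do this a lot, it’s worth wrapping in a little helper.  Here’s a sketch; the function name is mine, not a built-in, and I’ve added one extra nicety of doubling any embedded single quotes so the values stay valid T-SQL literals:

```powershell
# Wrap the Join trick in a reusable helper: escapes embedded single
# quotes, quotes each element, and builds the full IN (...) list body.
function Format-SqlInList {
    param([string[]] $Values)
    $escaped = $Values | ForEach-Object { $_ -replace "'", "''" }
    return "'" + [string]::Join("', '", $escaped) + "'"
}

$a = '2/2/2013', '2/3/2013', '2/4/2013'
$Query = "SELECT * from dbo.T1 where col1 IN ($(Format-SqlInList $a))"
$Query
```

Note the helper returns the outer quotes too, so the query line no longer needs to add them itself.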
Good luck.

Offline database size

Ok, so just as an aside: I didn’t find out until later that night that it was mere coincidence that Jen and I posted on the exact same topic.  She thought I had taken her topic and exploded it into PS, but I didn’t even know she wrote about it until she said something.  So here’s hoping we’re on separate topics today…
Oh yeah, and you can see both of the other posts here: Mine and Hers.

So on to today’s topic. Today I wanna talk about getting the size of offline DBs. I’ve got a use case right now where we’ve got dozens of DBs on a server that are offline and we need to know how much space we’ll gain if we delete them. You can’t get space through normal means because they’re offline. So the next best thing you can do is to get the size from the files themselves. Here you’ve got 2 choices, but only one is really viable.
First, you can go look at the OS and physically see what the file sizes are. If you’ve got a lot this isn’t workable in the least.

However, if you query sys.master_files you can get the size in 8K pages and then do some math to get a good idea of how much space you’ll get back if you were to delete them.  Here I’ve broken it down for you by drive, but you can just take out the drive col and get the total.  Now, this has one big limitation: it doesn’t account for mount points.  You would have to know what the mount points are to do that, and that’s another discussion.  If you’ve got straight drives, though, you can use this as-is.

select 
LEFT(physical_name, 2) as Drive
, sum(size*8)/1024.0 as TotalInMB 
, sum(size*8)/1024.0/1024.0 as TotalInGB
from sys.master_files mf
inner join sys.databases sd
on sd.database_id = mf.database_id
where sd.state_desc = 'offline'
group by LEFT(physical_name, 2)
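And if you do need to double-check against the OS (say, because of those mount points), you don’t have to eyeball dozens of files by hand.  Given the list of physical paths from sys.master_files, a quick Powershell sketch can sum the actual on-disk sizes; the function name is mine and the paths are placeholders:

```powershell
# Sum the on-disk size of a list of database files, in MB.
# $Paths would normally come from sys.master_files.physical_name.
function Get-DbFileSizeMB {
    param([string[]] $Paths)
    $bytes = ($Paths | ForEach-Object { (Get-Item $_).Length } |
              Measure-Object -Sum).Sum
    return [math]::Round($bytes / 1MB, 2)
}
```

Point it at the physical_name values for just the offline DBs and you get the real reclaimable number, mount points and all.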

Log Management Made Easy

Hey guys… I wrote a nice little log management script I thought some of you might like. It doesn’t do anything to your system so it’s safe to run. Well, I’m turning xp_cmdshell on and off so if you don’t like that then this script isn’t for you.
I’m just adding more info around the DBCC SQLPERF(LOGSPACE) output.  These are the most common things I need to know when I have a runaway log.  Of course, feel free to modify it any way you like.  Unfortunately, due to when PS came into play, you can only run this on SQL2K8 boxes and above.  If you want to run it on lower boxes, you’ll have to take out the PS portion of it.

One more thing before I give you the script: you’re responsible for anything you run on your box, so don’t blame me if you mess something up.  Like I said, I believe this script to be safe, but only if you manage the xp_cmdshell portion properly.  If you run the script as it is now, it’ll turn off xp_cmdshell, and if you’ve got processes that rely on it, they will fail.  So be warned yet again that this is a risk.  The reason I’m turning it off again is that I don’t want to leave anything open on your box that shouldn’t be.  But I really like having the extra info so I don’t have to go look it up.  I hope you do too.

Some important notes about the script:
1. It gives you the physical location of the log files, so if there’s more than 1 you’ll see more than 1 entry for each DB.
2. The ShrinkCmd has a variable at the top used to control its default. I like to shrink down to 1GB a lot of times, but set this to whatever you like.
3. You can expand this for yourself in ways I can’t do for you. For instance you could tie it to your log backup job to see when the last execution was and if it failed.
4. I’ve added the last log backup date for you… you’re welcome.
5. Due to the nature of how PS works, you have to change the instance name at the top to the current instance you’re working with. There’s nothing I could do about that. I tried to make it as simple as possible.
6. The PS portion relies on xp_cmdshell. However, I turn it on and off for you in the script. If you want it left on, make sure you comment that portion out of the code or you could have stuff that breaks because you just turned on xp_cmdshell.

declare @LogSize int,
		@Instance varchar(100)
SET @LogSize = 1024
 
SET @Instance = 'localhost\default' -- If default instance then put Default ex: 'MyServer\default'
 
SET NOCOUNT ON
 
CREATE TABLE #LogSpace
(
DBName varchar(100),
LogSizeInMB float,
LogSpaceUsedInPCT real,
Status tinyint
)
INSERT #LogSpace
EXEC ('dbcc sqlperf(logspace)')
 
 
CREATE TABLE ##LogSpacePSTemp
(
DBName varchar(100),
LogBackupDate varchar(25)
)
 
exec sp_configure 'show advanced options', 1
reconfigure
 
exec sp_configure 'xp_cmdshell', 1
reconfigure
 
declare @cmd nvarchar(2000)
		SET @cmd = 'sqlps "cd sqlserver:\sql\' + @Instance + '\databases; $a = dir; foreach($DB in $a){$DBName = $DB.Name; $LogBackupDate = $DB.LastLogBackupDate; invoke-sqlcmd -query ""INSERT ##LogSpacePSTemp SELECT ''$DBName'', ''$LogBackupDate''""" -SuppressProviderContextWarning}"'
 
		--PRINT @cmd
		exec xp_cmdshell @cmd, no_output
 
 
 
select 
LS.DBName
, LS.LogSizeInMB
, LS.LogSpaceUsedInPCT
, D.log_reuse_wait_desc as LogReuseWait
, LT.LogBackupDate as LastLogBackup
, DATEDIFF(mi, LT.LogBackupDate, getdate()) as MinsSinceLastLogBackup
, D.recovery_model_desc as RecoveryModel
, MF.physical_name 
, 'USE [' + D.name + ']; DBCC SHRINKFILE([' + MF.name + '], ' + CAST(@LogSize as varchar(10)) + ')' as ShrinkCmd
from #LogSpace LS
INNER JOIN master.sys.databases D
ON D.Name = LS.DBName
INNER JOIN master.sys.master_files MF
ON D.database_id = MF.database_id
INNER JOIN ##LogSpacePSTemp LT
ON LT.DBName = LS.DBName
ORDER BY LS.LogSizeInMB DESC
 
drop table #LogSpace
drop table ##LogSpacePSTemp
 
exec sp_configure 'xp_cmdshell', 0
reconfigure
exec sp_configure 'show advanced options', 0
reconfigure