
Manage Security Centrally

When you’re in an environment where you need to add user accounts to different servers on a regular basis, it can be a pain to manage the requests because you have to connect to each server, and either use the GUI to add the account, or write the statement yourself. Now, this normally isn’t a big deal, but if you have to do it several times a day that extra time it takes to go through the steps can start to get old.
So here I’ve written a stored procedure that lets you manage the process from a single location. Now, I’m going to caution you up front that this SP is run from a server where the service account already has admin rights on the servers you’re targeting, so you need to make sure this SP is locked down. Make sure that only DBAs you trust and who are responsible can run this. And if you can, make sure even the ones you trust can’t alter it because it logs its actions and you don’t want anyone being able to turn the logging off.
Ok, you’ve been warned, and I don’t want any lawsuits or emails telling me that someone messed things up in your shop, because it’s all on you. And while I’ve tested this and run it in a few locations, make sure you test it yourself before relying on it in any real production scenario.
The legal stuff is out of the way so let’s get into the details of the SP itself.
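Since locking the SP down is so important, here’s a rough sketch of the kind of permissions you might put around it. The role and login names here are just placeholders for whatever your shop uses:

```sql
-- Hypothetical lockdown sketch: the role and login names are placeholders.
CREATE ROLE TrustedDBAs;  -- only DBAs you trust go in here
ALTER ROLE TrustedDBAs ADD MEMBER [domain\SMcCown];
GO
-- Nobody runs it by default; only the trusted role can execute it.
DENY EXECUTE ON dbo.AddServerPerms TO public;
GRANT EXECUTE ON dbo.AddServerPerms TO TrustedDBAs;
-- Keep ALTER away from everyone so nobody can strip the logging out.
DENY ALTER ON dbo.AddServerPerms TO TrustedDBAs;
```

The point is that EXECUTE and ALTER are separate perms, so you can let someone run it without letting them change it.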

CREATE PROCEDURE dbo.AddServerPerms
@ServerName varchar(200),
@Action varchar(10),
@UserAcct varchar(100) = 'domain\SMcCown',
@Run bit = 0
AS
/*
This SP allows you to add/drop a user to sa on any box you like. It makes the process easier than connecting to the server each time.
Of course, you should always be careful about adding sa accts to servers so you should lockdown the use of this SP to only DBAs that you trust,
and who are responsible.
This SP also logs its actions so you can see which accts were added/dropped, when, and by whom.
This is also meant to be run from an acct that currently has sa on the server.
So replace the Execute as stmt with the acct of your choice.
This is the log table that needs to be in place before running this SP.
CREATE TABLE dbo.SASecurityLog
(
ID int identity(1,1),
ExecutionDateTime datetime,
ServerName varchar(200),
AcctName nvarchar(200),
Action varchar(10),
Code nvarchar(1000),
RunBy nvarchar(200)
)
*/
Execute as Login = 'sa';
DECLARE @SQL varchar(400),
@TotalSQL varchar(1000),
@TotalDropSQL varchar(1000),
@DropSpidSQL varchar(200),
@RunErrors nvarchar(max),
@Result varchar(10);
If UPPER(@Action) = 'ADD'
BEGIN
SET @SQL = 'CREATE LOGIN [' + @UserAcct + '] FROM WINDOWS; ALTER SERVER ROLE [sysadmin] ADD MEMBER [' + @UserAcct + '];'
END
If UPPER(@Action) = 'DROP'
BEGIN --Drop
SET @SQL = 'DROP LOGIN [' + @UserAcct + '];';
--------------------------------------------------------------------------
-----------------------BEGIN Get SPIDs------------------------------------
--------------------------------------------------------------------------
--Get SPIDs to kill.
SET @DropSpidSQL = 'SELECT SPID from sys.sysprocesses WHERE loginame = ''' + @UserAcct + ''';';
CREATE TABLE #SPIDs(col1 varchar(1000));
SET @TotalDropSQL = 'sqlcmd -S "' + @ServerName + '" -Q "' + @DropSpidSQL + '"'
INSERT #SPIDs(col1)
EXEC xp_cmdshell @TotalDropSQL;
DELETE #SPIDs
where col1 like '%--%'
OR col1 like '%rows affected%'
OR col1 IS NULL;
--------------------------------------------------------------------------
-----------------------END Get SPIDs--------------------------------------
--------------------------------------------------------------------------
 
--------------------------------------------------------------------------
-----------------------BEGIN Drop SPIDs-----------------------------------
--------------------------------------------------------------------------
----You can't drop a login if it's currently logged in, so here's where we
----drop all the SPIDs for the current user first.
----There's more than one way to do this. I chose this one because I didn't
----want to cram all this cursor syntax into D-SQL, and I didn't want to require
----a script on the drive to be managed. While that may be a cleaner way to code
----this solution, this method is more portable and doesn't require any extra setup.
DECLARE @currSPID varchar(100);
DECLARE SPIDs CURSOR
READ_ONLY
FOR SELECT col1 FROM #SPIDs
OPEN SPIDs
FETCH NEXT FROM SPIDs INTO @currSPID
WHILE (@@fetch_status <> -1)
BEGIN
SET @DropSpidSQL = 'KILL ' + @currSPID + ';'
SET @TotalDropSQL = 'sqlcmd -S "' + @ServerName + '" -Q "' + @DropSpidSQL + '"'
EXEC xp_cmdshell @TotalDropSQL;
--print @TotalDropSQL
FETCH NEXT FROM SPIDs INTO @currSPID
END
CLOSE SPIDs
DEALLOCATE SPIDs
DROP TABLE #SPIDs;
--------------------------------------------------------------------------
-----------------------END Drop SPIDs-------------------------------------
--------------------------------------------------------------------------
END --Drop
--------------------------------------------------------------------------
-----------------------BEGIN Log Action-----------------------------------
--------------------------------------------------------------------------
INSERT dbo.SASecurityLog
(ExecutionDateTime, ServerName, AcctName, Action, Code, RunBy)
SELECT GETDATE(), @ServerName, @UserAcct, @Action, @SQL, SUSER_SNAME();
If @Run = 0
BEGIN
SELECT @SQL AS AcctSQL;
END
--------------------------------------------------------------------------
-----------------------END Log Action-------------------------------------
--------------------------------------------------------------------------
--------------------------------------------------------------------------
-----------------------BEGIN Run Action-----------------------------------
--------------------------------------------------------------------------
If @Run = 1
BEGIN --Run = 1
CREATE TABLE #Results(ID tinyint identity(1,1), col1 nvarchar(2000));
SET @TotalSQL = 'sqlcmd -S "' + @ServerName + '" -Q "' + @SQL + '"'
INSERT #Results(col1)
EXEC xp_cmdshell @TotalSQL;
--------------BEGIN Get Run Errors------------------------
SELECT @RunErrors = STUFF((SELECT ' ' + col1
FROM #Results AS T1
ORDER BY T1.ID
FOR XML PATH('')), 1, 1, '');
If @RunErrors IS NULL
BEGIN
SET @Result = 'OK';
END
If @RunErrors IS NOT NULL
BEGIN
SET @Result = 'FAILED';
END
--------------END Get Run Errors---------------------------
END --Run = 1
--------------------------------------------------------------------------
-----------------------END Run Action-------------------------------------
--------------------------------------------------------------------------
select TOP(1) @Result AS Result, @RunErrors AS Errors, * from dbo.SASecurityLog ORDER BY ExecutionDateTime DESC;

Now there’s the code for you to use at your leisure. I’m going to discuss some important parts below by line number.
4: You want to be able to use the same SP to add or drop the acct so you can pass the action in.
26: I’ve set this up to run as sa because the service account on the central box has sysadmin on all the servers in the shop. A good scenario for this is when you have a server that’s running a performance app and needs to have sysadmin on all your SQL boxes. I’ll discuss the security implications of this in a section below.

33-39: Depending on the action you take, create the proper syntax.

44-52: There’s more than one way to do these next couple sections of code.  I chose this one because I didn’t want there to be any external setup.  You should just be able to install the SP and start using it.  So here I’m getting the list of SPIDs taken up by the acct on the target server.  You can’t drop a user acct if it’s currently logged in.  So getting the list of SPIDs to delete and then cursoring through them to kill them is what we’re doing here and in the next couple sections.  It’s possible that one of the SPIDs could disconnect and another take its place on the server between the time that you get the SPID list and when it gets killed, but it’s unlikely.

66-82: This is the cursor that kills the SPIDs for the current user from the target server.  Again, this could be done a different way and if you really don’t like this then rewrite this section.

90-96: We’re logging the action before we run it.  It’s important to log the action before the SP gets a chance to error out for some reason.

103-108: This is where we actually run the stmt against the target server.  Notice we’re inserting the EXEC stmt into a #table so we can capture the results.  If the stmt is a success it returns NULL so it’s easy to see if it failed.

110-122: If the stmt fails for any reason we captured the error in the previous section so we can put those results into a var and use that to set a status.  The use of STUFF here is how I commonly do this and it requires the ID column, so if you do this on your own, the code is easy to paste into another SP.

128: Return the results back to the user.
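To give you an idea of how it’s called, here’s what a typical add and drop would look like. The server and acct names are made up, of course:

```sql
-- Preview the stmt first: @Run = 0 only logs it and shows you the code.
EXEC dbo.AddServerPerms @ServerName = 'MyServer1', @Action = 'ADD',
     @UserAcct = 'domain\JSmith', @Run = 0;

-- Happy with what you saw? Run it for real.
EXEC dbo.AddServerPerms @ServerName = 'MyServer1', @Action = 'ADD',
     @UserAcct = 'domain\JSmith', @Run = 1;

-- And when the acct is done with its work, drop it again.
EXEC dbo.AddServerPerms @ServerName = 'MyServer1', @Action = 'DROP',
     @UserAcct = 'domain\JSmith', @Run = 1;
```

Either way the action lands in dbo.SASecurityLog, which is the whole point of forcing everyone through the SP.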

Account Security

Managing accounts in an enterprise can be time consuming, and the more servers you have, the more time consuming it can be.  This is why a lot of shops opt for using a single service account for SQL services.  However, this is a very dangerous practice because now you have a single account that runs every server in your network so if there’s a security breach, you have a big problem.  You have many processes running under this account so there are literally hundreds, if not thousands of opportunities for an attacker to do damage.

However, sometimes it’s necessary to run a centralized management server with an account, and give that account sysadmin on each SQL Server.  Quite often 3rd party performance monitors need these elevated rights.  There’s a difference though between having a single account with sa and having all the servers running under the same acct.  In the centralized server scenario, the only processes running on that account are the ones for the application, and often they’ll be running only on the central server.  Therefore, there won’t be any linked servers, jobs, SPs, or anything else running under that account.  So it’s less likely that you’ll have a breach against that account.  It can also be locked down so only the DBAs know it, whereas oftentimes many people know the service account passwords.  And again, there are many chances for security to get compromised with all of the processes that are likely running under it.

It’s also easier to change the password for that single centralized service account as you don’t have to change it across 100 servers.  So you’re most likely going to be able to adhere to better security principles.

Now, for the record… you should never run all your services under the same user account.  Always take the extra effort to have a separate account for each server.  And at the very least, you can group the servers by application and have a single service account for each application.  It’s not as good as having one for each box, but you’ll at least minimize the damage should you be compromised.

25 things I learned writing commercial software

It’s our job to learn new things.  We’re constantly studying, practicing, refining, etc.  But I’m not sure that I’ve ever learned so much about the different ways people work than I have by writing commercial software.  Now, my free Minion modules don’t cost anything, but that doesn’t mean they’re not commercial software.  They’re released to the public and are becoming quite popular, so they’re commercial in the distribution sense.

And there are things that I’ve learned about SQL and DBAs in general while building these tools.  Here’s a list of some of the things I’ve learned while writing the Minion maintenance modules.  Not all of these were new to me.  Some of them I’ve known for years, but were shoved in my face during this process.  Others I’ve kind of known, and still others never even crossed my mind because I’ve never had to deal with them before.  So come with me on the very rewarding, yet sometimes often quite frustrating journey of commercial software design.

  1. The customer isn’t always right. That may work in the food service industry, but it definitely doesn’t fly in IT.  I’ve found that giving people what they want is a dicey thing because not only do a lot of them not really know what they want, but sometimes they don’t have enough experience to know that what they want isn’t what they need.
  2. Service is key. At MinionWare we pride ourselves on answering support tickets immediately.  We consider it poor service to even let a ticket sit for an hour and in fact most of the time we answer the ticket within 10mins.  I think this is essential because people have to know that their issues are going to be addressed in a timely manner.
  3. You’re proud of your product. You’ve written something that you think everyone will just love.  So you package it up and send it out to the masses.  And as much as you love what your code does for the world, cut that love in half for the public.  Nobody will love your baby as much as you do; at least not in the beginning they won’t.  However, there’ll be some who really get what you do and who love it right away.  Others will take some convincing.  While yet others never get excited about anything.  It’s only DB maintenance, dude, how exciting can it be?
  4. People have all kinds of reasons for not adopting your solution. Sometimes it’s warranted, and sometimes it’s just laziness, or not wanting to change.  This is neither good nor bad, it just exists.  Get ready to hear ‘no thanks’ a lot more than you’re used to.
  5. There are so many different configurations and situations people run SQL in that it’s exceptionally difficult to write software to cover all the bases. Minion Backup was more difficult in this respect than Minion Reindex, but there was still some of that for MR.  But there are so many ways people want to add backups to their processes and so many things they need to be able to do that it’s really hard to get it right.  So the question is, have we gotten it right with MB?  Honestly, only time will tell, but I think we objectively did a really good job.  We’ve had some bugs but no major config flaws that I can see.  I think we’re set up well enough for the future of the product.
  6. It has to be as easy to configure as possible. Users don’t like to jump through hoops to make simple changes to software.
  7. No matter what you put in the product, you’ll have forgotten something that someone wants. I forgot to allow NUL backups in MB and a user requested it.
  8. User requests and bug reports are a good thing. It doesn’t necessarily make you a bad coder to have bugs.  You could just have a complicated app with many different complicated situations and you can’t code for everything out of the gate.  But feature requests and bug reports mean that people are using your software and like it well enough to want to see it improved.
  9. That BS you pulled at your last company where the code you wrote was “good enough” simply won’t fly here. Your name is on this code and how embarrassing would it be for someone to comment on a poor portion of your code only for you to have to say that you didn’t feel like doing it right.  Laziness is no excuse for poor coding or design.  Take the time to do it right, even if you have to recode portions several times.
  10. Don’t be afraid to try really outlandish things. IT gets mired in the mundane sometimes.  Turn your product on its ear.  If there’s something that you really want to be able to do, but it seems too hard, or even impossible, then that’s a place for you to shine.  Try sample code for even the most outlandish ideas to accomplish it.  You never know when it’s really not going to be as bad as it seemed.  It may not always work out, but at least you’re trying to tackle the issues people are faced with.  I had a few of these moments in MB.  There are problems we face every day with different backup situations that I wanted to solve.  And I didn’t want to be bound by what’s considered tradition to solve them.
  11. You can’t control who downloads your software. You may have a primarily American market in mind, but you’ll get downloads from all around the world.  Why is this important?  Well, that instantly throws you into different collations, time zone issues, etc.  I got caught in MB by foreign decimals.  I hadn’t counted on that and when I started getting downloads from other countries, backups stopped running because Powershell and SQL handle these decimals differently.  I didn’t know that before I started this.
  12. Test Test Test… then test again. Keep a list of all your edge cases and test every new version against every one of them.  The more you test the better your product.  And formalize it.  Don’t just run it a few times on your laptop and release it to the world.  If you support different versions of SQL then you have to test every feature not only on every one of those versions, but also on all versions of Windows they can be installed on.  And if you can, test it against every major service pack.  Microsoft added 3 columns to RESTORE HEADERONLY in a service pack and it broke MB.  It didn’t even cross my mind to test for individual service packs before I started this.
  13. You can’t test for everything. Sometimes there are some ridiculous things that keep your software from being successful and sometimes they’re not anything you could’ve foreseen.  Again, MB has a perfect example.  As it turns out when you’re loading the Powershell SQL module, if you have SSAS installed on the server it has no effect on backups.  However, if you have SSAS installed and the service isn’t started, then it shoots up a warning when you load the provider.  So we found that the warning was taking the place of the data we were expecting and backups were halted.  If you’d have asked me a year ago if having SSAS turned off would affect your backup routine, I would’ve said ‘Hell No’.  Leave it to me to write software that finds this kind of issue.
  14. Every feature request doesn’t have the same weight. I don’t really believe in up-voting feature reqs.  I think if a feature is a good idea then it should go into the product no matter how many people requested it.  Maybe I’ll change my mind when there are 2 million people using my stuff and I’ve got feature reqs coming out my ears, but for now, I look at everything on its merits.  That doesn’t mean though that every request is equal.  I’ve had some pretty ridiculous feature reqs from people who clearly weren’t DBAs and really don’t know the proper way to manage their backups.  These are the requests you don’t give much weight to.  However, this is your opportunity to teach, so help your product shine by showing them the proper way to do things using your product to do it.
  15. Documentation is key. The more you can tell people about your product the more successful you’ll be.  There are people who just won’t read it, but there are others who will comb out every last nugget.  And if you have a particularly sensitive feature, or something that is a little more difficult to configure, then give your reasoning behind designing it the way you did.  Give the use cases for the feature.  This will help people know when to use it and when not to.  And it’ll help them know what’s going on behind the scenes.  The more they know the better everyone is.
  16. You can’t add every feature request.
  17. Use your own software. If you don’t use it, then who will?  And there’s no better way to flesh out bugs, and usability issues.  You should always put yourself in the shoes of your users coming in for the first time.  You’d be really surprised how quirky something you wrote for yourself is.  MB was my private backup utility for years and I had a set of steps I went through to set it up.  I knew them.  I was used to them.  So it didn’t bother me having to do it.  But expecting everyone to go through those steps is ridiculous.  Sometimes you can only make something so easy, but don’t go out of your way to make it hard.  Step out of your own head.
  18. Get plenty of people to test it out for you. This can be hard because you’ve not only got to find someone willing to put beta software on their box, but they’ve got to be the right person.  Building up a group of reliable beta testers can be the difference between life and death.  I’ve had beta testers find some pretty glaring bugs in my software and I’m grateful for each and every one of them.
  19. Seriously ask yourself if you’re actually adding anything to the market. Are you really solving a problem, or putting a really good spin on something? Or just throwing a slightly different version of the same thing out there?  So if you’re not an expert in the field you’re writing the software in, then do some research and find out what already exists and what the biggest issues are.
  20. The internet is a cold, dark place. Writing software is one thing, and getting the word out is another.  You quickly find that coming up with different ways to get the word out isn’t as easy as you’d think.  It takes persistence too.  You can’t just send out a couple tweets and a blog and call it a day.  It takes dedication and a lot of thought to find the avenues that’ll actually tell people about your stuff.  Keep with it though.
  21. Write software with support in mind. Chances are you’ll have to support what you write, and not leaving yourself in a good position will be the death of you.  So make sure you try to anticipate any issues someone could have and write in some debugging mechanisms.  Your customers will love you for it, and so will you.  And don’t make the debug info too hard to get at.  Remember, you’re the one who’s going to use it, so give yourself what you need.  Sometimes your customers will use it and bypass you altogether.  These are the guys we like.
  22. Writing software is one thing, but learning to support it is another. Sure, you may be the genius behind your code, but that doesn’t mean you have experience troubleshooting it.  Sure, you’ve debugged your code many times on your test box, but what about through email with a customer who won’t just let you on his system?  Do you know the right questions to ask?  Do you know the right things to have them do to repro the more complicated issues?  I’ve had to learn how to support my own products and it’s shown me that even my debug mechanisms weren’t what I thought they were.  So I’ve had to improve my debugging scenario and going forward it’ll be first on my mind with every feature.
  23. There’s a fine line between hardcoding things, and having parameters. You can’t hardcode everything, but you can’t have 500 params passed in either.   It’s just too clunky.  So good luck with finding that balance.
  24. Never rest on your laurels. Always be thinking ahead to the next release.  I’m very happy with the current version of MB and MR, but before the code was released I was already listing enhancements for the next couple releases.
  25. Be honest about your shortcomings. People hate it when you BS them, so don’t even try.  Be honest about how the product works and why.  Not only will people respect you more for it, but you may convert them to your way of thinking.  People who don’t love you won’t love you anyway so you can’t convert everyone, but being honest about your bugs, and your features can go a very long way.  Show them you’re making an honest good-faith effort to write something good.

XP_CmdShell isn’t Evil

I’ve been hearing it more and more the past year.
“XP_cmdshell should always be turned off.”
“Whatever you do, don’t turn on XP_cmdshell!”
“We can’t do that, it requires XP_cmdshell!”
“You’ll fail your audit if XP_cmdshell is turned on.”
And all the other variations.

And I suppose I’ve been hearing it more and more lately because Minion Reindex requires it and Minion Backup will require it even more so.

However, I’ll tell you I’m getting pretty tired of hearing it so true to my blog I’m going to rant.
XP_cmdshell has been around forever. And way back in the day, like 15-20yrs ago, it was installed wide open to the public. This is where the problem started. This was back in the day when SQL’s GUI allowed way too many people who had no idea what they were doing to create and manage DBs. That ease of use was a huge part of SQL Server taking hold in the industry. However, with the product being that easy to use, a lot of these untrained DBAs had no idea XP_cmdshell was even there, so their instance was completely vulnerable and they didn’t even know it. Honestly, this was Microsoft’s fault. They should never have packaged up something that dangerous completely open to the public. But you know what, back then they were also installing sa with a NULL password by default too. And Oracle had their scott\tiger username\password combo, so MS wasn’t the only one doing dumb security back then.

However, now XP_cmdshell comes turned off and when you enable it, it’s not open to public anymore. So seriously, what are you still afraid of? I understand that you used to be scared of it because there was no way to lock it down back then. In fact, Microsoft didn’t provide a way to lock down XP_cmdshell until somewhere in the neighborhood of version 4.2. So back when it was open to public I can see how writing a DENY statement would be really taxing to you as a DBA.
But these days you don’t have any excuses. You have to go out of your way to open it up to public. XP_cmdshell is still really useful and I’m personally able to create many excellent solutions using it… things that would be much more difficult otherwise. And do you know what I tell people who tell me how dangerous it is? I ask them why they don’t lock it down.
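And locking it down really isn’t much work. Enabling it and keeping it away from public is just a few lines; the login name here is a placeholder for whatever acct actually needs it:

```sql
-- xp_cmdshell ships disabled; turn it on through sp_configure.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- It's not granted to public anymore, so explicitly grant it
-- only to the accts that actually need it (placeholder name).
USE master;
GRANT EXECUTE ON xp_cmdshell TO [domain\DBAProcess];
```

That’s it. Anyone who isn’t a sysadmin and doesn’t have that grant simply can’t call it.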

Think about it… there are many dangerous features in SQL. And they’re all kept in check by controlling permissions to them. You don’t see anyone screaming that those other features should be allowed on the box because they just say, we use it but we keep its usage controlled pretty tightly. So why doesn’t that apply to XP_cmdshell? Do you think that SQL all of a sudden forgets how to deny execute perms when that gets called? Do you think that SQL honors all security except that one? Do you think XP_cmdshell is powerful enough to override SQL security and just do what it wants anyway?
Of course not. So what are you afraid of?

The truth is that XP_cmdshell can do a lot and in the wrong hands it can make a royal mess of things. Then again so can DELETE and UPDATE. So can SHUTDOWN. So can CLR. So can DROP DATABASE. So can Dynamic SQL. And you don’t see anyone saying that all of those should never be allowed on any server for any reason. And I would honestly venture to say that Dynamic SQL has been the cause of far more security breaches than XP_cmdshell ever has. I don’t have any numbers to back me up, but I bet if you look at the number of security issues caused by XP_cmdshell, they’re far out-weighed by other features.

And it’s not like people have no way to get that functionality just because XP_cmdshell is disabled. There are still cmdline job steps and cmdline SSIS tasks. And of course, you’ve got CLR. All of which can be just as dangerous as XP_cmdshell yet they run on systems all the time. And I know what you’re thinking… “But Sean, we control those through permissions so they can’t do anything really bad.” Yeah, so you’re making my point for me. But do you think that if an SSIS guy wanted to do something bad to your box that he couldn’t find a way if he weren’t locked down? Of course he could.

The cool thing about the cmdline task in Agent jobs is that they can be run via proxy. You can setup a proxy user to run that step under so that its Windows perms are limited and it can’t run haywire. You wanna hear a secret? There’s a built-in proxy mechanism for XP_cmdshell too. I could tell you how to do it, but DatabaseJournal has already done such a fine job. So here’s the link to setting up the cmdshell credential.
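For the record, the credential setup itself is a one-liner. The acct and password here are obviously placeholders:

```sql
-- Non-sysadmin callers of xp_cmdshell run under this proxy acct,
-- so its Windows perms put a ceiling on what they can do.
EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\CmdShellProxy', 'StrongPasswordHere';

-- And to remove the proxy later:
-- EXEC sp_xp_cmdshell_proxy_account NULL;
```

Sysadmins still run xp_cmdshell as the SQL service acct; the proxy only applies to everyone else.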

I don’t want you to just turn on XP_cmdshell on all of your systems for no reason. But I don’t want you to completely rule it out as a solution just because you’re afraid of it. Tell your Windows admins who are afraid of it to mind their own business and stick to what they know. You’re a DBA and it’s time for you to take back your SQL instances. Lock them down. Don’t be afraid to use cool functionality because so many people refused to read the documentation 20yrs ago. You know better now. So go out there and do the right thing. Lock down XP_cmdshell, but use it.

Priority vs. Weight

Ok, so this is actually Sean writing this time.
I thought I’d hit the ground running with a great topic that came up just this morning.

The issue is how do you prioritize tasks in code? So let’s say you’re going to process a list of DBs for processing, but you also want to do them in a specific order. You’ve got 2 choices really when deciding on how to do this: Priority and Weight.

Priority:
This one is probably the most common. It’s when you have a list of objects and you rank them, say from 1-10. In this case 1 is the top priority and 10 is the least. This is how a priority system works: items move up the list to the top position.

Weight:
A weight system works just the opposite way. You give more weight to items by giving them a higher number. This extra weight makes them more important so they get processed first. So in the 1-10 list, 10 would be processed first because it has more weight.

So what’s the difference between these? Well, I think a priority system is easier when the list is fairly static. You’re not going to be making any real changes to the priorities or the objects very often.

A weight system is better when your list and priorities are more dynamic.

Let’s take a look at an example:
You want to process some DBs for something. You pull in the list of DBs and you want them in a specific order. So you put them in #DBs with an Ordering col.
If there won’t be any new DBs in the list for a long time, and you’re not likely to change the order, then a priority system is good for you. However, if you’re going to be adding more DBs, and those DBs will possibly even be higher priority than the current ones, then a weight system is best. The reason is that if you’ve already got the number 1-10 populated, and something new pops up in the list that’s higher priority than anything else, you’d have to set that to 1 and then shift all the others down by 1. So what used to be 1 is now 2, what used to be 2 is now 3, etc. However, with a weight system, you can add as many new items as you like, and you just keep increasing the number. So in this case the new item would be assigned 11 because it’s more important than anything else and has more weight.

Of course if you have to arrange the weights of any of the lower ones it does become just a bit more tedious. In this case you’ll possibly have to make another sub-group column. This way you can keep your major groups and then just change priorities or weights in the sub group. So let’s say you’ve got DBs grouped from 1-100 using either method, but we’ll say it’s by weight for now. If you have to make one DB a higher weight by one you’d have to shift everything else up one.
However, if you have that DB in a major group of say 10 DBs, and they all have weights within that group, then you only have to change the weights within that one group. So whereas before you might have had to change the weights of 30 DBs, now you only have to change under 10.
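Here’s a rough sketch of what that 2 group idea looks like in a #table. The column names are just my own convention:

```sql
-- GroupWeight orders the major groups; SubWeight only matters inside a group.
CREATE TABLE #DBs
(
    DBName sysname,
    GroupWeight int,  -- bump a whole group without touching the others
    SubWeight int     -- the only thing you reshuffle day to day
);

INSERT #DBs (DBName, GroupWeight, SubWeight)
VALUES ('Sales', 10, 3), ('HR', 10, 1), ('Logging', 5, 1),
       ('NewCriticalDB', 20, 1);  -- new top item: just give it more weight

-- Heaviest first. Adding a new top-priority DB never forces a renumber.
SELECT DBName FROM #DBs ORDER BY GroupWeight DESC, SubWeight DESC;
```

Notice the new DB just gets 20 and jumps the line; nothing else had to move.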

I tend to use a weighted system more, but I have nothing against a priority system. And I prefer the 2 group system even more.

Powershell to Excel

I got this question on twitter recently so I decided to blog it real quick. The question is, how do you output data to Excel from PowerShell?

Well, there’s a free function that I’ve used and I’ve been quite happy with it. I’m not going to give a lot of explanation here except to say that when using it on a large dataset I’ve found it to be painfully slow. And by painfully slow I mean several hours. However, using it with a small amount of data isn’t too bad. I like it because it does all the lifting for me. I just output to the function and I’m done. So anyway, here’s the link to the site. The dev does a good job of giving examples.

http://www.lucd.info/2010/05/29/beyond-export-csv-export-xls/

What is an Overload?

When I teach PowerShell, the question I get from DBAs most often is: what is an overload? Because when we get into methods you can’t go very long at all without saying that they’re overloaded, and most DBAs just don’t live in that world enough to know programming terms like that.

Overloading is when a function, method, or whatever can be called with different sets of parameters. Let’s take a good example: calling the ChangePassword() method under the Logins context. I’m gonna walk you through this step by step because I want to make sure you understand.

Let’s start by opening SSMS and right-clicking on Logins. Then go to Start PowerShell.
You’ll be presented with a prompt in the Logins context. Now, if you do a get-member you’ll see a list of properties and methods. We’re concerned with the methods here. As a quick aside, a property is an adjective. It tells you something about the object, while a method is a verb. It’s something you can do to the object. So in our case you can see several properties like SID. SID is a property of a login… it describes something about the login. But the method ChangePassword() is a verb. It’s something we can do to the login. Methods are like functions in SQL more or less. Anyway…

So here’s a query you can type to get down to the info we’re interested in. I’m going to narrow the results down to just the ChangePassword() method we’re interested in so we don’t have to pick through a bunch of results.

dir | gm | ?{$_.name -eq "ChangePassword"} | FL

Here’s a screenshot of the results.
(Screenshot: get-member output for ChangePassword())

Ok, now we can actually talk about the overloads. Notice that in the Definition column there are 3 ways to call this method.

1. System.Void ChangePassword(string newPassword)
2. System.Void ChangePassword(string oldPassword, string newPassword)
3. System.Void ChangePassword(string newPassword, bool unlock, bool mustChange)

In the 1st one you can simply pass in the new password.
In the 2nd one you can pass in the old password and the new password.
And in the 3rd one you can pass in the new password along with a bool to unlock the acct and a bool for whether they must change it the next time they log in.

Each one of these sets of parameter combinations is called an overload. Sometimes you’ll see a method that has no overloads. That means there’s only one way to call it. The Refresh() method is like this. It’s not overloaded because you can only call it one way… with no parameters.

dir | gm | ?{$_.name -eq "Refresh"} | FL

(Screenshot: get-member output for Refresh())

You can see from the output here that there is only one item listed in the Definition column. There’s only one way to call the Refresh() method, and that’s with no parameters. If you try to call it any other way you will get an error.

So a method can either be overloaded or not. And this is how you know how to call any method. You do a get-member on it and look at its definition. You’ve probably noticed by now that the definition also gives you the data types and names of the parameters it’s expecting. Above, for the ChangePassword() method, you can see that the newPassword parameter is a string. So you automatically know you’re going to call it like this:
ChangePassword("MyN3wPassw0rd!!!")
And you know the 3rd overload takes a string and 2 booleans. So it’ll look like this:
ChangePassword("MyN3wPassw0rd!!!", 1, 1)
or like this:
ChangePassword("MyN3wPassw0rd!!!", $true, $true)

Ok guys, that’s about it for overloads. I wanted to give a nice thorough explanation for the beginners out there. Sometimes these concepts can be a bit foggy to understand and .Net guys always explaining things with .Net terms doesn’t really help. So I wanted to try to put it in nice plain English.
Let me know if you have any questions.

What an idiot!

As DBAs we quite often run into others who aren’t as smart as us.  The dev is an idiot.  The .net guy is an idiot.  The users are idiots.  The manager is an idiot.  The VP, well don’t even get me started.  And other DBAs are really idiots.  At least that’s how it is in our heads anyway.  We fall into this cycle of calling everyone idiots for every little thing they do wrong.  The dev uses a wrong data type and it makes a few queries a lot slower, what an idiot, he should’ve known better.  A .net guy uses EF instead of putting it in an SP and it causes tons of blocking, what an idiot.  Another DBA tries to fix a DB that’s down and he does something that ends up making it worse… what an idiot.

It’s pretty easy to say everyone’s an idiot when we have the luxury of hindsight isn’t it?  Sure, I could have told you that every single one of those decisions was wrong and why.  But could I have told you before you did it and it went south?  Maybe, maybe not.  I’ve made plenty of mistakes in my career (and am dedicated to continuing) that in hindsight weren’t the best option, but was I an actual idiot for doing it?  Again maybe, maybe not.

I just think we jump on the idiot bandwagon too early and too often.  And I know I’m a big offender.  It doesn’t take much for me to start branding people left and right, but I also try to temper it with some reason.  Just because someone doesn’t have the same experiences I do doesn’t make them an actual idiot.  A dev chooses the wrong data type for a column.  Is he an idiot, or does he just not have the same experience with data types that I do?  I’d have to say it depends on what the mistake was.  Did he choose a varchar(25) for Address, or did he choose datetime?  Because one makes him less experienced with addresses and the other makes him pretty close to an idiot.  And what if he chose the bit data type for a SalaryAmount column?  Well, I can only hope that he’s writing the table his own salary will be stored in.

I’ve seen plenty of things that seemed to be basic that I didn’t know. And that’s because there’s just so much to know it’s hard to quantify. That’s why I make sure I interview everyone I see for at least an hour before making a decision. I honestly believe you can’t judge the sum of someone’s experience in just a handful of questions. In fact, I’ve found plenty of guys who got the first 20 questions wrong and then we suddenly got to their area of expertise and they started blowing the questions out of the water.

So anyway, just give some of these guys a break and realize that they may not be complete idiots just because they don’t know something you do. That’s not to say there aren’t any real idiots out there. You guys know that I’ve definitely run into my fair share of them. But I’m trying harder to lighten up on them.

Part of the problem is the learning process we go through, which is next to none. Computers are hard. SQL is hard. .NET is hard. They’re all hard. And yet training is so poor. I’ve seen more IT training than I can count, but the number of courses I’ve been in that actually taught the topic is very few. Sure, the high-level stuff gets taught, but the hows and whys of doing things are rarely covered. There are some guys out there who really take the time to break it down for you, but try to find one of them. One of the biggest reasons I never got into BI is because all the BI guys teach beginning BI like you’re already a BI expert. They explain BI terms with other BI terms and everyone just nods and smiles. But I guarantee you that most of them walk away without a good understanding of what was just said. .Net guys are big offenders in that area too. They explain .Net to you like you’ve been a coder for years and you’re just supposed to know what all this stuff is.

So it’s no wonder that so few people really know their jobs well. They’re never taught what they need to know. So are they really idiots for not knowing something they were never taught? There are so many things that can go wrong with a system at any given time, how can they be sure that the issue is being caused by a bad data type, or by one particular piece of code? There are of course ways to find out, but so many companies are in such a hurry to move on to the next project they never get a chance to dig into these issues. And again, were they really taught how?

So here we are in the middle of the learning revolution and there’s so little quality training to be had. You can go almost anywhere and learn how to perform the steps for a task, but where do you go to learn what you actually need to know? How do you learn that one thing is stupid over another thing, and that other thing exists for a reason, so when is it supposed to be used? I was talking to someone about this very topic just this morning.

So this whole thing was prompted by a training session I had with someone not long ago. Someone did something they shouldn’t have and when I corrected them they asked why. And when I gave my reason he said, oh yeah, I never thought of that. And I could clearly see that he wasn’t an idiot, he just didn’t have the experience he needed. And since then he’s done it right and even done it the other way a couple times because the situation was different. See, I gave him the reasoning so now he can reason out for himself when to use one method over another. And that’s training that’s worthwhile.

To me, a true idiot is someone who gets shown the right way to do things and still refuses to employ it. He’s also someone who has been in his current career for many years and doesn’t even know the basics. I have very little patience for, say, a SQL dev who’s been doing it for 10yrs and doesn’t even know the basics of the data types. Because you can’t tell me it’s never come up. I also don’t like DBAs with 10yrs behind them who can’t write a simple backup statement. Again, that’s a basic you should know cold.

Nothing beats practice

For a very long time now I’ve preached to my various teams the value of doing your 20s.  And I don’t expect most of you to know what that is, so here’s a quick explanation.  I believe there’s certain syntax you should know by heart.  Things like backup/restore, DB creation, working with user accts, etc. should be so ingrained you never forget them.  And the only way to do that is to do it again and again and again.  So when I start new DBAs down their road to discovery, I start them on their 20s.  And that means writing the command you’re working on 20x a day for 2 weeks.  By the end of that period, you’ve written it at least 200x.  Now, in the case of backup/restore, I typically pick a test server with around 10-15 DBs on it (I often create empty DBs) and make them type the commands for every DB 20x a day for 2 weeks.  That means they’re writing the commands 200x for each DB, and a little quick math tells you that by the end they’ve written the backup command at least 2,000x.  Now, name me something you’ve done 2,000x and have forgotten easily.
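To make it concrete, here’s the kind of backup/restore pair I mean when I talk about drilling your 20s.  The DB name and file paths are just placeholders; swap in your own:

```sql
-- The sort of statements to drill until they're automatic; DB name
-- and paths here are only examples.
BACKUP DATABASE MyDB
TO DISK = 'C:\SQLBackups\MyDB.bak'
WITH INIT, STATS = 10;

RESTORE DATABASE MyDB
FROM DISK = 'C:\SQLBackups\MyDB.bak'
WITH REPLACE, RECOVERY, STATS = 10;
```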

The whole point of this is to give them tons of experience with the commands in a very short period of time.  Innovation and understanding don’t come when you’re struggling to remember syntax, and they don’t come if you don’t know any of the options available to you.  So by forcing my DBAs to learn like this, I’ve found that they really start to become competent, and ultimately they appreciate that I forced this on them.  It’s incredible how simple things start getting when you know what you can and can’t do; once the syntax is second nature, you’ve got a world of options available to you.  And I still find way too many DBAs are button monkeys who don’t know anything that isn’t presented to them in a wizard.

So anyway, the main reason I wrote this is that it proved itself to me today.  We needed to add a PK to a table and I was in a room with a bunch of SQL devs.  They started to look up the syntax and I said, wait, give me a crack at it.  I sat down and busted it out without any slips.  And the thing is, this is one of the things I’ve practiced a lot.  But I haven’t touched it in almost a year, so I’m a little surprised to find that it didn’t leave me.  Then again, when you do it as much as I have you won’t forget it anytime soon.  So my point here is that if you practice enough you can go a few months without doing something and you’ll still be ok.  And the funny thing is I didn’t even really remember the syntax myself.  My fingers did it for me.  I love how muscle memory works.  So hey, if you really want to get good at SQL, practice some of the more common syntax.  Do your 20s.  You’ll really be glad you did.  And you’ll get things done a lot quicker because you’re not always looking up basic syntax.
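For reference, the PK statement in question looks something like this.  The table and column names are made up for the example:

```sql
-- Adding a PK to an existing table; dbo.Orders and OrderID are
-- hypothetical names for illustration.
ALTER TABLE dbo.Orders
ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);
```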

Why load test?

I was recently asked to document some justifications around buying a load testing tool like Benchmark Factory.  And for the record, while it’s not the only tool out there, I’m a huge fan of Benchmark Factory.  I’ve used it for years and it’s a tremendous tool.

So anyway, I just jotted down some things in an email and pasted it directly into this post.  It’s not an exhaustive list, but I think it covers pretty much everything you’d need to get someone to write a check.

Here are a couple use cases for load testing builds before pushing into prod:

  1. Lock contention – there are 2 types of lock contention, live locking and deadlocking.  Both of these can wreak havoc on a prod system and should be tightly monitored and controlled.  And even minor changes to simple code can cause a drastic increase of either of these 2 locking scenarios.  And while there’s almost no way to eliminate these effects in prod entirely, you can ensure that you greatly reduce them by proper planning.  Currently the best way to mitigate these issues in prod is to catch them before they go into prod.  And the best method for catching these scenarios in pre-prod environments is to perform load testing.  If your load test mimics your prod environment closely enough, then you should see this increased activity and make the necessary changes before it impacts your customers.
  2. Rollout testing – not all DB changes require the system to be down.  And being able to load test with a production-level scenario will ensure that you don’t take unnecessary downtimes because you’ll know exactly what to expect when the rollout happens.
  3. Capacity planning – this is perhaps the most overlooked, but also one of the most important aspects of production work.  Being able to certify your server to a certain number of users or a certain workload is essential in capacity planning.  In these load tests you can increase the number of users, increase the amount of data, etc. to see how your system will behave as your business grows.  You can also see where it will fail first.  So as your business doubles, where will you see the effects first?  Will it be CPU, disk, blocking, etc?
  4. Performance – Not only can you test the perf of the system in general, but you can also test the performance of specific methods of doing things.  If you’re developing a new process and you need to see which method will perform the best, there’s no substitute for load testing it to see which one is best.  And you also see if there are going to be any locking or indexing issues.  For developing new processes, there’s no better way to solidify a new method.
  5. Upgrade – test upgrades of the DB system itself, as well as patches and hardware.  Oftentimes a perceived improvement doesn’t materialize in prod, or can even have unanticipated side effects.  Load testing these scenarios can greatly reduce surprises as well as ensure you’re putting your efforts where they need to be.

Red-Gate is Badass!!

Ok, so the other night I uninstalled my R2 install on my box and put on the new SQL2012.  I know upgrade exists, but I just wanted the thing on there and didn’t feel like messing with anything stupid going wrong.  So anyway, I was half way through my uninstall when I remembered, Oh Crap, I’m gonna lose SQLPrompt cause SQL2012 just came out.  I was pretty upset because I use that tool all the time.

So while Denali was installing, I went out to Red-Gate to see if they had listed anywhere when SQL Prompt would support Denali… and you know what?  It already does.  I installed the newest version and it works beautifully in SQL2012.  And that’s not all.  They’ve got this kewl new thing they’re doing where they let you see beta features.  They call it Experimental Features and it’s at the bottom of the SQL Prompt 5 menu.  Here’s what it looks like.

These are beta features and you’re welcome to install them and try them out as you wish.  Personally I love this because I quite often want stuff like this and even if there are a couple bugs it’s still better than not having it at all.

So Red-Gate, I salute you for having this ready in time for the release and for doing such a good job with presenting us with these new features.  I want to have your babies.