Category Archives: Development

XP_CmdShell isn’t Evil

I’ve been hearing it more and more the past year.
“XP_cmdshell should always be turned off.”
“Whatever you do, don’t turn on XP_cmdshell!”
“We can’t do that, it requires XP_cmdshell!”
“You’ll fail your audit if XP_cmdshell is turned on.”
And all the other variations.

And I suppose I’ve been hearing it more and more lately because Minion Reindex requires it, and Minion Backup will rely on it even more.

However, I’ll tell you I’m getting pretty tired of hearing it, so, true to my blog, I’m going to rant.
XP_cmdshell has been around forever. And way back in the day, like 15-20yrs ago, it was installed wide open to the public. This is where the problem started. This was back in the day when SQL’s GUI allowed way too many people who had no idea what they were doing to create and manage DBs. That ease of use was a huge part of SQL Server taking hold in the industry. However, with the product being that easy to use, a lot of these untrained DBAs had no idea XP_cmdshell was even there, so their instance was completely vulnerable and they didn’t even know it. Honestly, this was Microsoft’s fault. They should never have packaged up something that dangerous completely open to the public. But you know what, back then they were also installing sa with a NULL password by default too. And Oracle had their scott\tiger username\password combo, so MS wasn’t the only one doing dumb security back then.

However, now XP_cmdshell comes turned off, and when you enable it, it’s not open to the public anymore. So seriously, what are you still afraid of? I understand that you used to be scared of it because there was no way to lock it down back then. In fact, Microsoft didn’t provide a way to lock down XP_cmdshell until somewhere in the neighborhood of version 4.2. So back when it was open to the public, I can see how writing a DENY statement would be really taxing to you as a DBA.
But these days you don’t have any excuses. You have to go out of your way to open it up to the public. XP_cmdshell is still really useful, and I’ve personally been able to create many excellent solutions with it… things that would be much more difficult otherwise. And do you know what I tell people who tell me how dangerous it is? I ask them why they don’t lock it down.
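And just to show how little effort “locking it down” actually takes, here’s roughly what enabling it and controlling who can run it looks like (the user names here are made up, and non-sysadmins need a user in master plus an explicit GRANT):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

USE master;
CREATE USER ETLUser FOR LOGIN ETLUser;  -- only sysadmins can run it until you grant it
GRANT EXECUTE ON xp_cmdshell TO ETLUser;
-- and you can always DENY it to anyone you don't trust:
DENY EXECUTE ON xp_cmdshell TO SomeOtherUser;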

Think about it… there are many dangerous features in SQL, and they’re all kept in check by controlling permissions to them. You don’t see anyone screaming that those other features shouldn’t be allowed on the box; people just say, we use them, but we keep their usage controlled pretty tightly. So why doesn’t that apply to XP_cmdshell? Do you think that SQL all of a sudden forgets how to deny execute perms when that gets called? Do you think that SQL honors all security except that one piece? Do you think XP_cmdshell is powerful enough to override SQL security and just do what it wants anyway?
Of course not. So what are you afraid of?

The truth is that XP_cmdshell can do a lot, and in the wrong hands it can make a royal mess of things. Then again, so can DELETE and UPDATE. So can SHUTDOWN. So can CLR. So can DROP DATABASE. So can Dynamic SQL. And you don’t see anyone saying that all of those should never be allowed on any server for any reason. And I would honestly venture to say that Dynamic SQL has been the cause of far more security breaches than XP_cmdshell ever has. I don’t have any numbers to back me up, but I bet if you look at the number of security issues caused by XP_cmdshell, they’re far outweighed by those caused by other features.

And it’s not like people have no way to get that functionality just because XP_cmdshell is disabled. There are still cmdline job steps and cmdline SSIS tasks. And of course, you’ve got CLR. All of these can be just as dangerous as XP_cmdshell, yet they run on systems all the time. And I know what you’re thinking… “But Sean, we control those through permissions so they can’t do anything really bad.” Yeah, so you’re making my point for me. But do you think that if an SSIS guy wanted to do something bad to your box, he couldn’t find a way even if he were locked down? Of course he could.

The cool thing about the cmdline task in Agent jobs is that it can be run via proxy. You can set up a proxy user to run that step under, so its Windows perms are limited and it can’t run haywire. You wanna hear a secret? There’s a built-in proxy mechanism for XP_cmdshell too. I could tell you how to do it, but DatabaseJournal has already done such a fine job. So here’s the link to setting up the cmdshell credential.
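And for the record, the whole thing boils down to one proc call (the account name and password here are obviously placeholders):

EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\CmdShellProxy', 'Str0ngP@ssw0rd!';
-- non-sysadmin callers of xp_cmdshell now run under that limited Windows acct
-- drop it again with: EXEC sp_xp_cmdshell_proxy_account NULL;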

I don’t want you to just turn on XP_cmdshell on all of your systems for no reason. But I don’t want you to completely rule it out as a solution just because you’re afraid of it. Tell your Windows admins who are afraid of it to mind their own business and stick to what they know. You’re a DBA and it’s time for you to take back your SQL instances. Lock them down. Don’t be afraid to use cool functionality just because so many people refused to read the documentation 20yrs ago. You know better now. So go out there and do the right thing. Lock down XP_cmdshell, but use it.

Priority vs. Weight

Ok, so this is actually Sean writing this time.
I thought I’d hit the ground running with a great topic that came up just this morning.

The issue is how do you prioritize tasks in code? So let’s say you’re going to work through a list of DBs, but you want to process them in a specific order. You’ve really got 2 choices when deciding how to do this: Priority and Weight.

Priority:
This one is probably the most common. This is when you have a list of objects and you rank them from, say, 1-10. In this case 1 is the top priority and 10 is the least. This is how a priority system works: items move up the list toward the top position.

Weight:
A weight system works just the opposite way. You give more weight to items by giving them a higher number. This extra weight makes them more important, so they get processed first. So in the 1-10 list, 10 would be processed first because it has the most weight.

So what’s the difference between these? Well, I think a priority system is easier when the list is fairly static. You’re not going to be making real changes to the priorities or the objects very often.

A weight system is better when your list and priorities are more dynamic.

Let’s take a look at an example:
You want to process some DBs for something. You pull in the list of DBs and you want them in a specific order. So you put them in #DBs with an Ordering col.
If there won’t be any new DBs in the list for a long time, and you’re not likely to change the order, then a priority system is good for you. However, if you’re going to be adding more DBs, and those DBs will possibly even be higher priority than the current ones, then a weight system is best. The reason is that if you’ve already got the number 1-10 populated, and something new pops up in the list that’s higher priority than anything else, you’d have to set that to 1 and then shift all the others down by 1. So what used to be 1 is now 2, what used to be 2 is now 3, etc. However, with a weight system, you can add as many new items as you like, and you just keep increasing the number. So in this case the new item would be assigned 11 because it’s more important than anything else and has more weight.
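Here’s a minimal sketch of that in T-SQL (the table and DB names are made up):

CREATE TABLE #DBs (DBName sysname, Weight int);
INSERT #DBs VALUES ('Archive', 1), ('HR', 5), ('Sales', 10);

-- weight: the biggest number gets processed first
SELECT DBName FROM #DBs ORDER BY Weight DESC;

-- a new top-priority DB just takes the next number... nothing gets renumbered
INSERT #DBs SELECT 'Billing', MAX(Weight) + 1 FROM #DBs;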

Of course, if you have to rearrange the weights of any of the lower ones, it does become just a bit more tedious. In this case you’ll possibly have to make another sub-group column. This way you can keep your major groups and then just change priorities or weights within the sub-group. So let’s say you’ve got DBs grouped from 1-100 using either method, but we’ll say it’s by weight for now. If you have to raise one DB’s weight by one, you’d have to shift everything above it up by one.
However, if you have that DB in a major group of say 10 DBs, and they all have weights within that group, then you only have to change the weights within that one group. So whereas before you might have had to change the weights of 30 DBs, now you have to change fewer than 10.
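And in code, that two-level system is just one more column in the sort (again, made-up names, building on the sketch above):

ALTER TABLE #DBs ADD GroupWeight int;

-- reordering inside one group only ever touches that group's sub-weights
SELECT DBName FROM #DBs ORDER BY GroupWeight DESC, Weight DESC;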

I tend to use a weighted system more, but I have nothing against a priority system. And I prefer the 2-group system even more.

Powershell to Excel

I got this question on twitter recently so I decided to blog it real quick. The question is, how do you output data to Excel from powershell?

Well, there’s a free function that I’ve used and been quite happy with. I’m not going to give a lot of explanation here, except to say that when using it on a large dataset I’ve found it to be painfully slow. And by painfully slow I mean several hours. However, using it with a small amount of data isn’t too bad. I like it because it does all the lifting for me. I just pipe my output to the function and I’m done. So anyway, here’s the link to the site. The dev does a good job of giving examples.

http://www.lucd.info/2010/05/29/beyond-export-csv-export-xls/

Do I like Speaking? — T-SQL Tuesday #41


Well, I actually have a love/hate relationship with speaking.  I’ve read that there are 2 types of speakers, those who get nervous before going on stage, and liars.  Sure, I don’t expect that you’ll have the same level of nerves after you’ve been doing it for 10yrs, but there’s still going to be a bit of butterflies going on no matter what. 

I think it’s the process I like though.  Coming up with an idea is crucial.  I can never come up with anything when I want to.  My best sessions are inspired and they pretty much write themselves once I get the right inspiration.  However, there’s still getting all the demos ready, and figuring out exactly what I want to say.  It’s a big part of the process.

Next, it’s submitting the abstract.  This can be just as hard as doing the session itself, because you have no idea who’s on the other end reading it, and coming up with the exact wording that’ll get you picked is a crapshoot most of the time.  I’ve seen popular, experienced speakers get turned down in favor of “nobodies”.  So sitting there with someone, in my case Jen, and coming up with that poetry that’ll get you noticed over all the others is a fun part of the process.  It’s like going up on a rollercoaster.  You’re heading up that first hill just waiting to see if you’re going on the rest of the ride or if it’s going to break down and leave you sitting at the top having to walk back down.  It’s kinda like a little nerdy xmas.

Now you’re at the event and it’s the day of your session and time to make sure demos are working.  Well, are they?  You get them all worked out, get your environment reset and do it again.  Then reset it again.  Everything’s perfect.  Time to get to your session without a moment to lose.

You’re now on the platform watching people come in and just hoping that you have decent attendance.  Then it happens.  The thing you dread every time you speak.  You see an MCM walk in and sit down in the back.  DAMMIT!  Now I’ve gotta make sure I don’t say anything stupid.  But wait, what’s this?  Another MCM?  He’s coming up to the front and promising he’ll be listening intently.  Then 2 MVPs come in together and sit next to a couple guys from the product team.  ARE YOU KIDDING ME?!?  Ok, session’s over.  I’m putting on Blazing Saddles for you guys instead.  It’s alright.  I can do this.  These guys aren’t here to see you fail, they’re just here to tweet it if you do.  No worries.

So now you start your session and your first demo has an error.  Oh come on man… I triple checked it.  WTx is the problem?  Ok, it’s easily fixable.  No worries.  Everything’s back on track.  The next demo goes well so now it’s smooth sailing.  Now the questions start coming in.  This first one is easy.  The next 2 I haven’t really thought of.  Hope I know enough about this topic off the top of my head to be able to answer it in front of the A-Team down there.  I see they’ve got their thumbs on the twitter buttons on their phones.  Now the thumbs are going crazy.  What did I say?  Was it stupid?  Can’t stop now, time to move on.  Now a complicated topic comes up and while the demo goes well questions start popping up.  Again with the thumbs guys, WTx?  Now the next topic and there are already questions.  A burning look goes to the MCMs.  If you guys even move I’m breaking your thumbs.

Session’s over now.  More questions are coming in and whether you know them or not you’re hitting your stride.  You try as hard as you can to remember some of them so you can add them to your session next time.  The MCMs come up and say they thought it was great and they never really thought about it that way before.  SCORE!!!  You pack up your stuff and leave the room and as soon as you get out you put everything down and check twitter to see what the hell those guys were tweeting so much.  “In session with MidnightDBA.  Backup tuning is awesome.” — “Backup tuning with MidnightDBA.  I’ve never seen that trace flag before.  This is the best!” — “Our tweeting is making him nervous.  Keep it up guys!”

So do I like presenting?  Hell YEAH!

I’m a Fraud

I’m a fraud because I didn’t bust out the query I needed right away.
I’m a fraud because I couldn’t remember the exact syntax to restore a certificate even though I’ve done it dozens of times.
I’m a fraud because a user asked me a question I had to lookup.
I’m a fraud because I still haven’t had time to learn Service Broker.
I’m a fraud because it took me too long to troubleshoot my last PS script.
I’m a fraud because I only study about half as much as I need to.
I’m a fraud because I haven’t written a book.
I’m a fraud because I haven’t filmed half the new videos on my list.
I’m a fraud because I don’t understand all the ins and outs of SMO.
I’m a fraud because every other MCM and MVP out there is so much smarter than I am.
I’m a fraud because I had to go running to a real .NET guy to have my site rebuilt instead of doing it myself.

And on and on…

See, we’re all a product of the same marketing.  We see all the super smart guys around us and we believe the marketing around them.
Grant Fritchey didn’t have to research anything for his book.  He knew it all off the top of his head. 
Adam Machanic couldn’t write his last book fast enough.  He banged it out like he was signing his name.  And Denny Cherry knew everything for his last security book and didn’t have to look anything up or test anything.

All we see is the final product.  We don’t see the weeks/months of failed attempts, tinkering, research, and begging for help that go into the books, whitepapers, blogs, and videos that people crank out.

Sure, some have to beg, borrow, and steal less than others, but nobody and I mean NOBODY just sits down and bangs out an entire book on the first try.  And nobody writes an entire training course in a single sitting and then just films it.  It takes planning, practice, and experimentation to make everything look easy.  And everything looks easy by that point because it is.  They’ve already learned it.  In fact, sometimes they may have even learned it for the book, or for that video.

This is what’s both good and bad about the way we do things these days.  We have access to so much material there’s just no excuse to not be able to find something.  But at the same time, having access to that much info and that many authors shows us exactly what we don’t know and it’s easy to compare yourself with them and start to feel like a fraud.  And the industry experts aren’t immune to it either.  Trust me, they all do it too.

I always say, don’t confuse your mentor with your mirror.  Find someone in the community you want to be like, but don’t compare yourself to them.  You’ll always lose that battle.  Compare yourself to how you were yesterday, or last week, or last month.  Give yourself something solid to accomplish and then do it.  If you really want to learn Service Broker, then make it happen.  Put together a training list and start working your way through it.  Then look at your list in, say, a couple weeks or a month and see how much you’ve gotten done.  Are you a fraud then?  Are you a failure?  Maybe, maybe not, right?

So is it possible to actually be a fraud, or a failure?

Of course it is.  In my estimation, anyone who’s been a professional DBA for 10yrs and doesn’t even know the basics is a complete fraud.  You have to know something about your craft, and you can’t tell me that in 10yrs you’ve never had a chance to learn the difference between char() and varchar(), or how to type a backup command, etc.  If you’ve been doing SQL for that long and you don’t even know simple basics like that, then yeah, you very well may be a fraud and you need to do something about it.

However, if you think you’re a fraud because you’re comparing your on-going process of building yourself to the finished product you see the industry experts churning out, then you just need to compare yourself to the right thing.  Making that comparison is like starting to build a skyscraper and comparing it to the one next to yours that’s finished and just disappears into the clouds.  What you don’t see though is the construction going on on the other side of those clouds, or the remodeling happening on several of the floors, or the bad plumbing.

So keep your head down and build your own tower.  Look to the others for inspiration and guidance, but don’t compare yours to theirs. 

Hey, if you really want to see how perfect the experts are from the start, then just ask any of them, and I do mean any of them, if they’re embarrassed by the code they wrote 10yrs ago. 
Almost without exception, they’ll all say yes.

So now it’s your turn… why are you a fraud?

What is an Overload?

When I teach powershell, the question I get from DBAs most often is: what is an overload? Because when we get into methods, you can’t go very long at all without stating that they’re overloaded, and most DBAs just don’t live in that world enough to know programming terms like that.

Overloading is when a function, method, or whatever can be called with different sets of parameters. Let’s take a good example of calling the ChangePassword() method under the Logins context. I’m gonna walk you through this step by step because I want to make sure you understand.

Let’s start by opening SSMS and right-clicking on Logins. Then go to Start Powershell.
You’ll be presented with a prompt in the Logins context. Now, if you do a get-member you’ll see a list of properties and methods. We’re concerned with the methods here. As a quick aside, a property is an adjective. It tells you something about the object, while a method is a verb. It’s something you can do to the object. So in our case you can see several properties like SID. SID is a property of a login… it describes something about the login. But the method ChangePassword() is a verb. It’s something we can do to the login. Methods are like functions in SQL more or less. Anyway…

So here’s a query you can type to get down to the info we want. I’m going to narrow the results down to just the ChangePassword() method so we don’t have to pick through a bunch of results.

dir | gm | ?{$_.name -eq "ChangePassword"} | FL

Here’s a screenshot of the results.
[Screenshot: get-member output for the ChangePassword method]

Ok, now we can actually talk about the overloads. Notice that in the Definition column there are 3 ways to call this method.

1. System.Void ChangePassword(string newPassword)
2. System.Void ChangePassword(string oldPassword, string newPassword)
3. System.Void ChangePassword(string newPassword, bool unlock, bool mustChange)

In the 1st one you can simply pass in the new password.
In the 2nd one you can pass in the old password and the new password.
and in the 3rd one you can pass in the new password along with a bit to unlock the acct and a bit for whether they must change it when they log in next time.

Each one of these sets of parameter combinations is called an overload. Sometimes you’ll see a method that has no overloads. That means there’s only one way to call it. The Refresh() method is like this. It’s not overloaded because you can only call it one way… with no parameters.

dir | gm | ?{$_.name -eq "Refresh"} | FL

[Screenshot: get-member output for the Refresh method]

You can see from the output here that there is only one item listed in the Definition column. There’s only one way to call the Refresh() method, and that’s with no parameters. If you try to call it any other way you will get an error.

So a method can either be overloaded or not. And this is how you know how to call any method: you do a get-member on it and look at its definition. You’ve probably noticed by now that the definition also gives you the data types and names of the parameters it’s expecting. Above, for the ChangePassword() method, you can see that the newPassword parameter is a string. So you automatically know you’re going to call it like this:
ChangePassword("MyN3wPassw0rd!!!")
And you know the 3rd overload takes a string and 2 booleans. So it’ll look like this:
ChangePassword("MyN3wPassw0rd!!!", 1, 1)
or like this:
ChangePassword("MyN3wPassw0rd!!!", $true, $true)

Ok guys, that’s about it for overloads. I wanted to give a nice thorough explanation for the beginners out there. Sometimes these concepts can be a bit foggy to understand and .Net guys always explaining things with .Net terms doesn’t really help. So I wanted to try to put it in nice plain English.
Let me know if you have any questions.

What an idiot!

As DBAs we quite often run into others who aren’t as smart as us.  The dev is an idiot.  The .net guy is an idiot.  The users are idiots.  The manager is an idiot.  The VP, well don’t even get me started.  And other DBAs are really idiots.  At least that’s how it is in our heads anyway.  We fall into this cycle of calling everyone idiots for every little thing they do wrong.  The dev uses a wrong data type and it makes a few queries a lot slower, what an idiot, he should’ve known better.  A .net guy uses EF instead of putting it in an SP and it causes tons of blocking, what an idiot.  Another DBA tries to fix a DB that’s down and he does something that ends up making it worse… what an idiot.

It’s pretty easy to say everyone’s an idiot when we have the luxury of hindsight isn’t it?  Sure, I could have told you that every single one of those decisions was wrong and why.  But could I have told you before you did it and it went south?  Maybe, maybe not.  I’ve made plenty of mistakes in my career (and am dedicated to continuing) that in hindsight weren’t the best option, but was I an actual idiot for doing it?  Again maybe, maybe not.

I just think we jump on the idiot bandwagon too early and too often.  And I know I’m a big offender.  It doesn’t take much for me to start branding people left and right, but I also try to temper it with some reason.  Just because someone doesn’t have the same experiences I do doesn’t make them an actual idiot.  A dev chooses the wrong data type for a column.  Is he an idiot, or does he just not have the same experience with data types that I do?  I’d have to say it depends on what the mistake was.  Did he choose a varchar(25) for Address, or did he choose datetime?  Because one makes him less experienced with addresses, and the other one makes him pretty close to an idiot.  And what if he chose the bit data type for a SalaryAmount column?  Well, I can only hope that he’s writing the table his own salary will be stored in.

I’ve seen plenty of things that seemed to be basic that I didn’t know. And that’s because there’s just so much to know it’s hard to quantify. That’s why I make sure I interview everyone I see for at least an hour before making a decision. I honestly believe you can’t judge the sum of someone’s experience in just a handful of questions. In fact, I’ve found plenty of guys who got the first 20 questions wrong and then we suddenly got to their area of expertise and they started blowing the questions out of the water.

So anyway, just give some of these guys a break and realize that they may not be complete idiots just because they don’t know something you do. That’s not to say there aren’t any real idiots out there. You guys know that I’ve definitely run into my fair share of them. But I’m trying harder to lighten up on them.

Part of the problem is the learning process we go through, which is next to none. Computers are hard. SQL is hard. .NET is hard. They’re all hard. And yet training is so poor. I’ve seen more IT training than I can count, but the number of courses I’ve been in that actually taught the topic is very few. Sure, the high-level stuff gets taught, but the hows and whys of doing things are rarely covered. There are some guys out there who really take the time to break it down for you, but try to find one of them. One of the biggest reasons I never got into BI is because all the BI guys teach beginning BI like you’re already a BI expert. They explain BI terms with other BI terms and everyone just nods and smiles. But I guarantee you that most of them walk away without a good understanding of what was just said. .Net guys are big offenders in that area too. They explain .Net to you like you’ve been a coder for years and you’re just supposed to know what all this stuff is. So it’s no wonder that so few people really know their jobs well. They’re never taught what they need to know. So are they really idiots for not knowing something they weren’t taught? There are so many things that can go wrong with a system at any given time, how can they be sure that the issue is being caused by a bad data type, or by one particular piece of code? There are of course ways to find out, but so many companies are in such a hurry to move on to the next project that they never get a chance to dig into these issues. And again, were they really taught how?

So here we are in the middle of the learning revolution and there’s so little quality training to be had. You can go almost anywhere and learn how to perform the steps for a task, but where do you go to learn what you actually need to know? How do you learn why one approach is stupid while another exists for a reason, and when each is supposed to be used? I was talking to someone about this very topic just this morning.

So this whole thing was prompted by a training session I had with someone not long ago. Someone did something they shouldn’t have, and when I corrected them they asked why. And when I gave my reason he said, oh yeah, I never thought of that. And I could clearly see that he wasn’t an idiot, he just didn’t have the experience he needed. And since then he’s done it right, and even done it the other way a couple times because the situation was different. See, I gave him the reasoning, so now he can reason out for himself when to use one method over another. And that’s training that’s worthwhile.

To me, a true idiot is someone who gets shown the right way to do things and still refuses to employ it. He’s also someone who has been in his current career for many years and doesn’t even know the basics. I have very little patience for, say, a SQL dev who’s been doing it for 10yrs and doesn’t even know the basics of the data types. Because you can’t tell me that it’s never come up. I also don’t like DBAs with 10yrs behind them who can’t write a simple backup statement. Again, that’s a basic you should know cold.

Nothing beats practice

For a very long time now I’ve preached to my various teams the value of doing your 20s.  And I don’t expect most of you to know what that is, so here’s a quick explanation.  I believe there’s certain syntax you should know by heart.  Things like backup/restore, DB creation, working with user accts, etc. should be so ingrained you never forget them.  And the only way to do that is to do it again and again and again.  So when I start new DBAs down their road to discovery, I start them on their 20s.  And that means writing the command you’re working on 20x a day for 2 weeks.  And by the end of the period, you’ve written it at least 200x.  Now, in the case of backup/restore, I typically pick a test server with around 10-15 DBs (I often create empty DBs) on it and make them type the commands for every DB 20x a day for 2 weeks.  And that means that they’re writing the commands 200x for each DB.  And a little quick math tells you that the total they’ve written the backup command by the end is at least 2,000x.  Now, name me something you’ve done 2,000x and have forgotten easily. 
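And just so there’s no mystery about what I mean, these are the plain vanilla commands they’re drilling (paths and names are placeholders, obviously):

BACKUP DATABASE MyDB TO DISK = 'C:\SQLBackups\MyDB.bak' WITH INIT, STATS = 10;

RESTORE DATABASE MyDB FROM DISK = 'C:\SQLBackups\MyDB.bak'
WITH MOVE 'MyDB' TO 'D:\SQLData\MyDB.mdf',
     MOVE 'MyDB_log' TO 'L:\SQLLogs\MyDB_log.ldf',
     REPLACE;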

The whole point of this is to give them tons of experience with the commands in a very short period of time.  Innovation and understanding don’t come when you’re struggling to remember syntax, and they don’t come if you don’t know any of the options available to you.  So by forcing my DBAs to learn like this, I’ve found that they really start to become competent, and ultimately they appreciate that I forced it on them.  And it’s incredible how simple things start getting when you know what you can and can’t do; once you know how to code it, you’ve got a world of options available to you.  And yet I still find that way too many DBAs are button monkeys who don’t know anything that isn’t presented to them in a wizard.

So anyway, the main reason I wrote this is to say that it’s proven itself to me today.  We needed to add a PK to a table and I was in a room with a bunch of SQL devs.  They started to lookup the syntax and I said, wait, give me a crack at it.  I sat down and busted it out without any slips.  And the thing is that this is one of the things I’ve practiced a lot.  But I haven’t touched it in almost a year so I’m a little surprised to find that it didn’t leave me.  Then again, when you do it as much as I have you won’t forget it anytime soon.  So my point here is that if you practice enough you can go a few months without doing it and you’ll still be ok.  And the funny thing is I didn’t even really remember the syntax myself.  My fingers did it for me.  I love how muscle memory works.  So hey, if you really want to get good at SQL then practice some of the more common syntax.  Do your 20s.  You’ll really be glad you did.  And you’ll get things done a lot quicker cause you’re not always looking up basic syntax.
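And for reference, the statement I busted out was nothing more exotic than this (the table and column names here are made up):

ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);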

SODA Architecture

I was asked by a user to write this post.  And while I’m gonna beat around the bush a little, it’s only to give you some background so you understand the context of what this is all about.  Because everything’s connected and my goal here is to show you how a bunch of these things are related, especially when you break them down into essentials.

Quite often you’ll have a DB that just doesn’t perform no matter what you do to it.  You’ve spread out the I/O, you’ve indexed the hell out of it, and you’ve thrown hardware at it.  And maybe it’s not strictly performance you’re concerned with.  Maybe you’re concerned with backup/restore time, or security, or something else.  Well, enter SODA.  SODA stands for Service-Oriented Data Architecture.  And what it really means to do is to split your DB into different pieces based off of the services each piece is going to perform.  For those of you who are really DB literate, that’s all you need, but I’ll go ahead and flesh this out a little bit for those of you who would like a little more explanation.

Background Basics

Now, before I really get into SODA itself, I’m going to go into some performance basics to give you some background on how this fits into the grand scheme of things.  Let’s take a look at disk access. 

Of course, whenever you need to get resting data, you need to get it from disk, and whenever you get it from disk, that’s a physical I/O that needs to be performed.  And like anything else that performs physical work, you can overload the disk, and requests start to build up.  And this is what the disk queue counters in perfmon are all about.  The disk queue counters tell you how many I/O requests are currently waiting in line (the queue).  And of course the larger the queue, the further behind the disk is. 

That’s not entirely relevant these days because most of us are using some kind of SAN, and these counters are harder to measure on SANs.  The reason is that in order to interpret disk queues you need to know how many disks are in your array.  Every disk array in your system has an acceptable disk queue limit.  What that functionally means is that if you have a single disk, say in your laptop, that single disk has a number of requests it can fit in its queue before it can be considered “falling behind”.  So there’s a certain level of queuing that can happen and still be considered normal behavior.  And of course, with all the different drive vendors and types of drives out there, you can imagine that this acceptable queue number is different for them all.  And technically, you’re right.  If you want to know the exact acceptable disk queue for your given drive, you need to dig through the vendor specs and then take a lot of the numbers in there and plop them into a nice long formula that spits out the acceptable disk queue for that drive.  And I’ll tell you right now that’s a bit hard to manage for all but the most dedicated IT folks (and even for them too, really). 

So the way we get around this is we just use a standard measure of 2 per disk.  So for any given single disk you can have an acceptable disk queue of 2.  That means that if you have an array with 10 disks in it, then you can have an acceptable disk queue of 20.  Get it?  That’s why this measure isn’t really as useful on a SAN, because quite often you don’t have any real visibility into the specs of your SAN arrays.  Without this information you can’t really say what an acceptable disk queue would be, because you don’t know how many disks are in the array.  And SANs are mostly shared amongst many apps anyway, so you don’t even have dedicated storage to begin with.  The counter you’ll use to diagnose disk problems on a SAN is avg. disk secs/read (or write).
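And if you’d rather get at those latency numbers from inside SQL Server than from perfmon, here’s a rough sketch against the file stats DMV that gives you the same kind of avg read/write latency per file:

SELECT  DB_NAME(vfs.database_id) AS DBName,
        vfs.file_id,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS AvgReadMs,   -- ms per read
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgWriteMs  -- ms per write
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY AvgReadMs DESC;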

Ok, so that seems like a pointless digression, but it ties in.  Like I said though I just wanted to give you a little background so we’re more or less on the same page.  Now that we have that background, we can say that in a situation where we’re disk queuing, we have an underlying cause for our issue.  What would you say that underlying cause is?  Go ahead and take a second to consider it real quick before I give you the answer. 

The answer is we have disk queuing because we’re overworking the disk.  We’re pushing more work at it than it can do so it has to stack up that work.  I know that sounds simplistic, and it is, but it has to be understood before we can go any further.  With that understood, how can we fix this issue?  This is the heart of what I want to talk about in this piece.  There are 2, and only 2 ways to fix this issue.  Those 2 ways are: decrease the activity on the disk or increase the throughput on the disk.  That’s it.  There’s nothing else you can do to fix a disk queuing issue.  It’s the methods for doing each of these that can get involved and where you have the most leeway to be creative.  Hang on, I’ll show you how this ties into SODA soon enough.  And of course I’m not going to get into a full discussion here about all the different ways you can solve the issue, but to give a couple examples, a common way to decrease the activity on the disk is to separate the different files onto different LUNs, and a common way of increasing the throughput is to add disks to the LUN.  Again, this isn’t going to be a full discussion on that topic, so I’m going to stop there.  We have enough info to continue with our current discussion on SODA. 

Let’s look at another common performance issue we have in DBs… that of table performance.  Again, you’ve got a single table that could be a bottleneck and you’ve got to make it perform faster.  And again, your choices are limited.  There are really only 2 ways to solve this bottleneck.  You can decrease the volume of the requests against the table, or you can increase the throughput of the table.  Well, quite often these 2 solutions go hand in hand, and while this isn’t a discussion on table performance, a couple of the more common ways to solve this problem are:

a) improve the indexes to decrease the volume of requests (by decreasing the I/O)

b) partitioning the table and placing the different partition data on different LUNs to increase the throughput of the table itself (there’s a quick sketch of this below). 
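To give you an idea of what b) looks like in practice, here’s a bare-bones partitioning sketch (the function, scheme, table, and filegroup names are all made up):

CREATE PARTITION FUNCTION pf_OrderYear (datetime)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01');

CREATE PARTITION SCHEME ps_OrderYear
AS PARTITION pf_OrderYear TO (FG_2012, FG_2013, FG_2014);  -- one filegroup (and LUN) per partition

CREATE TABLE dbo.Orders
(   OrderID   int       NOT NULL,
    OrderDate datetime  NOT NULL
) ON ps_OrderYear(OrderDate);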

So hopefully you’re seeing a pattern form here.  For any given bottleneck you’ve really only got 2 choices:  Decrease the requests or increase the throughput.  It’s in the execution of either of those where the art comes into performance tuning.

Now let’s talk about SODA.  SODA stands for Service-Oriented Data Architecture.  And its main purpose is to help get around some kind of bottleneck, typically at the system level.  That’s not the only reason to use SODA so nobody email me and say I’m an idiot cause I said the only reason to use SODA was for system-level performance.  So anyway, SODA is really just a way to break up your DB into pieces that are more manageable.

Let’s take a quick example.  Take a DB with something like 3,000 tables and thousands of SPs.  That’s bound to be a very busy system and you could easily run into both memory and CPU bottlenecks.  What people will commonly do is split up the DB based off of the services it provides.  They may split out the Shipping components from the rest of the DB, and they may split out the Orders portion as well.  This huge DB we started with could potentially be split into many separate DBs based off of the services or the functions they provide.  Then all the devs have to do is just teach these new DBs to talk to each other (usually through SSIS, BizTalk, web services, or Service Broker, etc).  You now have a DB architecture based off of the services provided by each one.  You have a Shipping DB, and an Orders DB, and a Manufacturing DB, and an Inventory DB, etc.  This is what SODA is.  It’s splitting off the different portions of a large DB into their own DBs based off of their services.  Now these DBs can go onto different servers and have their own security, resources, etc.  So you can see how this is solving the bottleneck problem.  You’re decreasing the requests on each single DB and increasing the throughput by giving them their own resources on their own servers.  And of course they can stay on the same server if you like.

And hopefully you can see why I wanted to give you the background on solving bottlenecks so you could see that this is no different.  It’s the same thing as before, just at the DB level.

Now that I’ve covered SODA, I think it only fitting that I also cover its counterpart, DDR.  DDR is Data-Dependent Routing, and it’s yet another way to manage the influx of data you’ve got coming in.  Imagine the same DB as above with many different modules inside it.  Only this time you’re not going to split it up by function, but by data range.  So maybe you’ll keep 5 copies of the exact same DB and each copy will contain a different region of data, or maybe a different country.  This is actually pretty common practice.  I’ve seen many apps that split their data apart like this.  And there are several reasons for it.  It could be performance, data size, archival, etc.  And the reason it’s called ‘data dependent’ is because quite often there’s a layer on the front end or maybe even the middle tier that decides where the request should go.  So for example, maybe you’re looking up customers in TX.  You submit the request from the front end, and it has knowledge of where the different regions are stored and it dynamically sends you to the DB that has the TX data.  So every request has to go through this process because every request potentially goes to a different DB.  These DBs can be on different servers, but that’s not always the case.  In fact, most of the time whenever I see a server with hundreds or thousands of DBs it’s because this design was used.  A good example of this is with online training companies.  I’ve seen a few of them that have chosen to keep all of their corporate customers in their own DB instead of having everyone together in the same DB.  There’s no need to put them on different servers because it’s not a performance decision.  It’s just an organizational decision.  Maybe it’s based on security, maybe it’s based on data size, or maybe it was adapted from a much smaller app and this was the best way they had to expand their business model with the least effort.  Now, whether that’s a good design decision or a bad one doesn’t really matter here, because putting all customers in the same DB and putting them in separate DBs both have their pros and cons, so it really just depends on where you want your pain to reside.
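To make the ‘data dependent’ part a little more concrete, here’s a rough sketch of the kind of routing lookup that front-end or middle-tier layer performs (all the names are invented):

CREATE TABLE dbo.RegionRoute (Region varchar(10) PRIMARY KEY, DBName sysname NOT NULL);
INSERT dbo.RegionRoute VALUES ('TX', 'Customers_South'), ('NY', 'Customers_East');

-- figure out which DB holds the TX customers and point the query there
DECLARE @db sysname, @sql nvarchar(max);
SELECT @db = DBName FROM dbo.RegionRoute WHERE Region = 'TX';
SET @sql = N'SELECT * FROM ' + QUOTENAME(@db) + N'.dbo.Customers;';
EXEC sp_executesql @sql;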

So just to summarize:

SODA – putting each function or module in a DB into its own DB.  The schemas in the new DBs will be different as they all have different tables.

DDR – Keeping several identical copies of the same DB but with different data.  The schemas will be identical, but the data will be based off of some business rule you’ve chosen.  These rules can be date, region, country, alphabetical, etc.  It can be anything you like.

Ok I think that’s about all I have on this topic.  I know I went in kind of a roundabout way of talking about this, but I really felt you should have a little context.  Don’t think of these as completely separate things because they’re not.  Everything is related and if you can break them into basics then you’ll be much better at coming up with solutions.  And everything I talked about today is just another way to either increase throughput or decrease requests. 

Why load test?

I was recently asked to document some justifications around buying a load testing tool like Benchmark Factory.  And for the record, while it’s not the only tool out there, I’m a huge fan of Benchmark Factory.  I’ve used it for years and it’s a tremendous tool.

So anyway, I just jotted down some things in an email and here I’ve pasted it directly into this post.  It’s not an exhaustive list, but I think it covers pretty much everything you’d need to get someone to write a check.

Here are a couple use cases for load testing builds before pushing into prod:

  1. Lock contention – there are 2 types of lock contention, live locking and deadlocking.  Both of these can wreak havoc on a prod system and should be tightly monitored and controlled.  And even minor changes to simple code can cause a drastic increase of either of these 2 locking scenarios.  And while there’s almost no way to eliminate these effects in prod entirely, you can ensure that you greatly reduce them by proper planning.  Currently the best way to mitigate these issues in prod is to catch them before they go into prod.  And the best method for catching these scenarios in pre-prod environments is to perform load testing.  If your load test mimics your prod environment closely enough, then you should see this increased activity and make the necessary changes before it impacts your customers.
  2. Rollout testing – not all DB changes require the system to be down.  And being able to load test with a production-level scenario will ensure that you don’t take unnecessary downtimes because you’ll know exactly what to expect when the rollout happens.
  3. Capacity planning – this is perhaps the most overlooked, but also one of the most important aspects of production work.  Being able to certify your server to a certain number of users or a certain workload is essential in capacity planning.  In these load tests you can increase the number of users, increase the amount of data, etc. to see how your system will behave as your business grows.  You can also see where it will fail first.  So as your business doubles, where will you see the effects first?  Will it be CPU, disk, blocking, etc?
  4. Performance – Not only can you test the perf of the system in general, but you can also test the performance of specific methods of doing things.  If you’re developing a new process and you need to see which method will perform the best, there’s no substitute for load testing it to see which one is best.  And you also see if there are going to be any locking or indexing issues.  For developing new processes, there’s no better way to solidify a new method.
  5. Upgrade – test upgrades of the DB system itself, as well as patches and hardware.  Oftentimes a perceived improvement doesn’t materialize in prod, or can even have unanticipated side effects.  Load testing these scenarios can greatly reduce surprises, as well as ensure you’re putting efforts where they need to be.