All posts by Sean McCown

I am a Contributing Editor for InfoWorld Magazine, and a frequent contributor to SQLServerCentral.com as well as SSWUG.org. I live with my wife and 3 kids, and have practiced and taught Kenpo for 22yrs now.

Do I like Speaking? — T-SQL Tuesday #41

Well, I actually have a love/hate relationship with speaking.  I’ve read that there are 2 types of speakers: those who get nervous before going on stage, and liars.  Sure, I don’t expect that you’ll have the same level of nerves after you’ve been doing it for 10yrs, but there are still going to be a few butterflies no matter what.

I think it’s the process I like though.  Coming up with an idea is crucial.  I can never come up with anything when I want to.  My best sessions are inspired and they pretty much write themselves once I get the right inspiration.  However, there’s still getting all the demos ready, and figuring out exactly what I want to say.  It’s a big part of the process.

Next, it’s submitting the abstract.  This can be just as hard as doing the session itself because you have no idea who’s on the other end reading it, and coming up with the exact wording that’ll get you picked is a crap shoot most of the time.  I’ve seen popular, experienced speakers get turned down in favor of “nobodies”.  So sitting there with someone, in my case Jen, and coming up with that poetry that’ll get you noticed over all the others is a fun part of the process.  It’s like going up on a rollercoaster.  You’re heading up that first hill just waiting to see if you’re going on the rest of the ride or if it’s going to break down and leave you sitting at the top having to walk back down.  It’s kinda like a little nerdy xmas.

Now you’re at the event and it’s the day of your session and time to make sure demos are working.  Well, are they?  You get them all worked out, get your environment reset and do it again.  Then reset it again.  Everything’s perfect.  Time to get to your session without a moment to lose.

You’re now on the platform watching people come in and just hoping that you have decent attendance.  Then it happens.  The thing you dread every time you speak.  You see an MCM walk in and sit down in the back.  DAMMIT!  Now I’ve gotta make sure I don’t say anything stupid.  But wait, what’s this?  Another MCM?  He’s coming up to the front and promising he’ll be listening intently.  Then 2 MVPs come in together and sit next to a couple guys from the product team.  ARE YOU KIDDING ME?!?  Ok, session’s over.  I’m putting on Blazing Saddles for you guys instead.  It’s alright.  I can do this.  These guys aren’t here to see you fail, they’re just here to tweet it if you do.  No worries.

So now you start your session and your first demo has an error.  Oh come on man… I triple checked it.  WTx is the problem?  Ok, it’s easily fixable.  No worries.  Everything’s back on track.  The next demo goes well so now it’s smooth sailing.  Now the questions start coming in.  This first one is easy.  The next 2 I haven’t really thought of.  Hope I know enough about this topic off the top of my head to be able to answer it in front of the A-Team down there.  I see they’ve got their thumbs on the twitter buttons on their phones.  Now the thumbs are going crazy.  What did I say?  Was it stupid?  Can’t stop now, time to move on.  Now a complicated topic comes up and while the demo goes well questions start popping up.  Again with the thumbs guys, WTx?  Now the next topic and there are already questions.  A burning look goes to the MCMs.  If you guys even move I’m breaking your thumbs.

Session’s over now.  More questions are coming in and whether you know them or not you’re hitting your stride.  You try as hard as you can to remember some of them so you can add them to your session next time.  The MCMs come up and say they thought it was great and they never really thought about it that way before.  SCORE!!!  You pack up your stuff and leave the room and as soon as you get out you put everything down and check twitter to see what the hell those guys were tweeting so much.  “In session with MidnightDBA.  Backup tuning is awesome.” — “Backup tuning with MidnightDBA.  I’ve never seen that trace flag before.  This is the best!” — “Our tweeting is making him nervous.  Keep it up guys!”

So do I like presenting?  Hell YEAH!

I’m a Fraud

I’m a fraud because I didn’t bust out the query I needed right away.
I’m a fraud because I couldn’t remember the exact syntax to restore a certificate even though I’ve done it dozens of times.
I’m a fraud because a user asked me a question I had to look up.
I’m a fraud because I still haven’t had time to learn Service Broker.
I’m a fraud because it took me too long to troubleshoot my last PS script.
I’m a fraud because I only study about half as much as I need to.
I’m a fraud because I haven’t written a book.
I’m a fraud because I haven’t filmed half the new videos on my list.
I’m a fraud because I don’t understand all the ins and outs of SMO.
I’m a fraud because every other MCM and MVP out there is so much smarter than I am.
I’m a fraud because I had to go running to a real .NET guy to have my site rebuilt instead of doing it myself.

And on and on…

See, we’re all a product of the same marketing.  We see all the super smart guys around us and we believe the marketing around them.
Grant Fritchey didn’t have to research anything for his book.  He knew it all off the top of his head. 
Adam Machanic couldn’t write his last book fast enough.  He banged it out like he was signing his name.  And Denny Cherry knew everything for his last security book and didn’t have to look anything up or test anything.

All we see is the final product.  We don’t see the weeks/months of failed attempts, tinkering, research, and begging for help that go into the books, whitepapers, blogs, and videos that people crank out.

Sure, some have to beg, borrow, and steal less than others, but nobody and I mean NOBODY just sits down and bangs out an entire book on the first try.  And nobody writes an entire training course in a single sitting and then just films it.  It takes planning, practice, and experimentation to make everything look easy.  And everything looks easy by that point because it is.  They’ve already learned it.  In fact, sometimes they may have even learned it for the book, or for that video.

This is what’s both good and bad about the way we do things these days.  We have access to so much material there’s just no excuse to not be able to find something.  But at the same time, having access to that much info and that many authors shows us exactly what we don’t know and it’s easy to compare yourself with them and start to feel like a fraud.  And the industry experts aren’t immune to it either.  Trust me, they all do it too.

I always say, don’t confuse your mentor with your mirror.  Find someone in the community you want to be like, but don’t compare yourself to them.  You’ll always lose that battle.  Compare yourself to how you were yesterday, or last week, or last month.  Give yourself something solid to accomplish and then do it.  If you really want to learn Service Broker, then make it happen.  Put together a training list and start working your way through it.  Then look at your list in, say, a couple weeks or a month and see how much you’ve gotten done.  Are you a fraud then?  Are you a failure?  Maybe, maybe not, right?

So is it possible to actually be a fraud, or a failure?

Of course it is.  In my estimation, anyone who’s been a professional DBA for 10yrs and doesn’t even know the basics is a complete fraud.  You have to know something about your craft, and you can’t tell me that in 10yrs you’ve never had a chance to learn the difference between char() and varchar(), or how to type a backup command, etc.  If you’ve been doing SQL for that long and you don’t even know simple basics like that, then yeah, you very well may be a fraud and you need to do something about it.

However, if you think you’re a fraud because you’re comparing your on-going process of building yourself to the finished product you see the industry experts churning out, then you just need to compare yourself to the right thing.  Making that comparison is like starting to build a skyscraper and comparing it to the one next to yours that’s finished and just disappears into the clouds.  What you don’t see though is the construction going on on the other side of those clouds, or the remodeling happening on several of the floors, or the bad plumbing.

So keep your head down and build your own tower.  Look to the others for inspiration and guidance, but don’t compare yours to theirs. 

Hey, if you really want to see how perfect the experts are from the start, then just ask any of them, and I do mean any of them, whether they’re embarrassed by the code they wrote 10yrs ago.
Almost without exception, they’ll all say yes.

So now it’s your turn… why are you a fraud?

Finding Triggers in Powershell

I was helping someone out with something so I thought you guys would like to have these 2 short lines of code that can be pretty useful.  Here I’m going to show you how to check your DB for tables that have triggers, and then for tables with no indexes.  Do with them what you will.

First, let’s talk about triggers, why to avoid them, and how.

Triggers are a tricky thing to support because they complicate things.  They can be nested and you won’t even realize it.  Their firing order isn’t guaranteed either.  You can specify the first trigger to fire, but no ordering after that.  So you could see intermittent perf problems from both of these aspects of triggers.  And the reasons I usually see people putting triggers on tables aren’t even valid.  They need something to happen in another table, like an audit, or maybe they even need to change a value in the current row.  That’s the one that drives me crazy the most.  If you need to update a value you just inserted, then just change it before you insert it.  Inserts should be as fast as possible, and putting an update inside a trigger for a row you just inserted is just stupid.  Put that code in an SP and do whatever you need to the params so by the time you get to the insert everything is as it should be.

And if you need to do something in another table, like auditing that an action took place, then again put that code in an SP and just do the 2nd table write after the main insert.  This really isn’t hard to figure out.  In my career I think I’ve seen maybe a couple honest uses for triggers and I can’t even remember them right now.  But pretty much every time you need a trigger, you would be better served by putting several statements into a single transaction in an SP.  At least someone could easily see what’s going on and you can tightly control the order of ops.
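
Just to make that pattern concrete, here’s a rough sketch of the audit flavor (the proc, table, and server names are all made up for illustration):

$sql = @"
CREATE PROCEDURE dbo.InsertOrderWithAudit
    @CustomerID int,
    @Amount money
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;
        -- the main insert... everything is already the way it should be
        INSERT dbo.Orders (CustomerID, Amount)
        VALUES (@CustomerID, @Amount);
        -- the audit write that would otherwise be hiding in a trigger
        INSERT dbo.OrdersAudit (OrderID, AuditDate)
        VALUES (SCOPE_IDENTITY(), GETDATE());
    COMMIT TRAN;
END
"@
Invoke-Sqlcmd -ServerInstance "MyServer" -Database "MyDB" -Query $sql

Everything in one transaction, in plain sight, and in a guaranteed order.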

So to find all the tables in your DB with triggers, just go to the tables node in PS and run the following code.  Once you get it you can do anything with it you like.

dir | ?{$_.Triggers -ne $NULL} | ft schema, name, triggers -auto

Easy, right?

Now, if you do a get-member you’ll see a handful of properties that give you info on triggers.

dir | gm

HasAfterTrigger
HasDeleteTrigger
HasInsertTrigger
HasInsteadOfTrigger
HasUpdateTrigger

And of course you can use those, even for what we’re trying to do here, but it makes the code a little long to be practical.  The problem is you have to compare all of those properties. 

dir | ?{$_.HasAfterTrigger -eq $True -OR $_.HasDeleteTrigger -eq $True -OR $_.HasInsertTrigger -eq $True -OR $_.HasInsteadOfTrigger -eq $True -OR $_.HasUpdateTrigger -eq $True} | ft name, triggers -auto

Both of these lines of code do the same thing, but one is just less tedious than the other.

And you don’t have to filter it at all.  You’re free to just look at the raw data if you like:

dir | FT schema, name, triggers -auto
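
And since I promised tables with no indexes up top, here’s a sketch along the same lines.  The Indexes property is a collection, so checking its Count should do the trick (run it from the same tables node):

dir | ?{$_.Indexes.Count -eq 0} | ft schema, name -auto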

Ok, that’s it this time.

To AutoGrow or not?

I just got this question in the user group and thought I’d write a blog post instead of just answering the subset of users who could benefit from it.  The question was:

I have customized the values of the Auto growth according to the size of the database and the rate at which it grows. I have noticed that Auto growth kicks in about every 3 months – 6 months on an average. Is that OK? I have read articles where the advice on it ranges from “Auto growth is OK” to “Auto growth should kick in only during emergency”.

This is one of those topics that comes up again and again, unlike AutoShrink which I hope is settled by now.  I suspect it keeps coming up because there’s no real solid answer.

Ok, so whether or not to AutoGrow your files.  I’m going to talk about both data and log files together unless there’s a difference.  So unless I call one out over the other, I’m talking about them both.

You should definitely use AutoGrow.  And you should definitely NOT use AutoGrow.  That’s my way of getting around saying “it depends”.

It depends on a few factors really.

1. What you’re going to do with the files.
2. How big your environment is.
3. How many other files are on the drive.
4. How much activity is on the files.
5. Monitoring method.

Maybe there’s more, but that’s all I can think of right this second.  You get the idea.  Ok, so let’s go through them one at a time.

1. What you’re going to do with the files.

From time to time I step into a shop where the DBAs have bought into this idea that AutoGrowth is bad, so they have some job set up to monitor the size and they grow the files manually.  Now while that sounds like a good idea, it can cause more problems than it solves.  Let’s look at a scenario I’ve encountered more times than I care to try to count.  You get a request from a group to restore the DB to a dev or maybe a QA box so they can do some testing.  This is a common scenario, right?  I’ve done it plenty in almost every shop I’ve been in.

So you go to do the restore and it fails telling you that there’s not enough space on the drive.  You look and there’s not that much data in it so it should fit on the drive right?  No, not right.  The drive has to support the size of the file, not the size of the data.  So if you’ve got a 50GB drive, and a 100GB file it will fail even if there’s only 20GB of data in that file.  So now what do you do?  Do you go beg the SAN guys for more space or do you manage your files in smaller increments?

With AutoGrow you can set your own growth rate, so you can have it grow the files at whatever interval you want.  So how is manually growing the file going to add anything to the equation here?  And with Instant File Initialization (IFI) you don’t even have to worry about incurring the cost of zeroing out the file unless it’s a log.
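
And setting that growth increment yourself from PS is easy enough.  Here’s a rough sketch using SMO (the server, DB, and file names are made up, and remember SMO sizes are in KB):

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server "MyServer"
$f = $srv.Databases["MyDB"].FileGroups["PRIMARY"].Files["MyDB_Data"]
$f.GrowthType = [Microsoft.SqlServer.Management.Smo.FileGrowthType]::KB  # fixed growth, not percent
$f.Growth = 512000                                                       # 500MB, expressed in KB
$f.Alter()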

Now, for log files specifically, I know some of the top experts say that you can have perf problems if you grow your files too much and get too many VLFs, but honestly that problem really doesn’t come up that often.  And logs are very volatile.  Lots of things log activity that you don’t realize, and I wouldn’t want the log file to rely on me.  And again, I can’t stress enough that it really matters what you’re going to be doing with the files.  If you’ve got an extra 60GB of space in your log because you’re afraid of VLFs, then you’ll need that extra 60GB on every other system you plan to restore the DB on.  And you may not be afraid of the VLFs on those lower-level servers.
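
If you do want to keep an eye on your VLF counts, DBCC LOGINFO returns a row per VLF, so counting them from PS is a one-liner.  Something like this should work (server and DB names made up):

(Invoke-Sqlcmd -ServerInstance "MyServer" -Database "MyDB" -Query "DBCC LOGINFO").Count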

2. How big your environment is

Now let’s talk about large enterprise environments.  I tend to be in really large shops with hundreds or thousands of servers.  And I don’t know about you, but I don’t wanna spend my time managing file growths.  Consider my last environment where I had over 900 servers with over 4,000 DBs spread across all of them.  And that was just prod.  I’m not going to do that kind of analysis on all of those servers and manually grow all of those files.  And it’s honestly just ridiculous to even try.  There are 2 ways I could solve a problem like this.  

I could develop a process where I monitor the free space in all the files, and when it reaches a threshold it grows the file by a certain amount.  Hell, that’s just a homegrown version of autogrow isn’t it?  So that’s not a solution really. 
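
Just to show how much wheel-reinventing that is, here’s roughly what that homegrown process would look like (the server name and thresholds are made up, and again SMO sizes are in KB):

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server "MyServer"
foreach ($db in $srv.Databases | ?{-not $_.IsSystemObject}) {
    foreach ($fg in $db.FileGroups) {
        foreach ($f in $fg.Files) {
            if ($f.AvailableSpace -lt ($f.Size * 0.10)) {  # less than 10% free in the file
                $f.Size = $f.Size + 512000                 # grow it by 500MB
                $f.Alter()
            }
        }
    }
}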

I could also use autogrow on some of my boxes and manually grow my really important or trouble boxes.  And again we’re back to “it depends” aren’t we?  What we’re saying here is it’s ok to use autogrow on some servers and not on others, which means there’s no solid answer.  You just can’t spend all your time growing files.  Use autogrow here unless you have a reason not to.

3. How many other files are on the drive?

This argument may or may not have any teeth… it just depends on how you look at it.  The main reason for manually growing your files on drives where you’ve got a lot of other files is fragmentation.  And here I’m talking about fragmentation at the filesystem level, not inside the files themselves.  If you’ve got your files on a drive with lots of other files and they’re all growing, then they’ll be growing over each other, especially if they’re growing in smaller increments.  So you could fragment your drive pretty easily and that can definitely cause perf issues.  So the solution is typically to manually grow the files to a larger size so it reduces the amount of fragmentation you create when they do grow.  And that does have merit, but why not just set the AutoGrow setting higher then?

I can see a reason why you wouldn’t.  If there are a lot of DBs sharing that drive and they all grow fairly often, then you wouldn’t want to AutoGrow it to a certain size and have it fill up too much of the drive and starve the other DBs.  The most logical way around this issue though is twofold:

1. AutoGrow at smaller increments.  Unfortunately, this may put you back in the fragmentation scenario though.  If you go this route then you need to defrag the drive on a regular basis and you should be ok.

2. Split those DBs off onto their own drives.  This is the best solution because you get stuff for free.  Things like simplified space mgmt., 0% fragmentation, and I/O isolation all come along for the ride when you put DB files on their own drives.

However, all that said, if you can’t put the files on their own drives and you’re really afraid of starving the other DB files, then your only real choice may be to monitor the size and grow manually.  But this shouldn’t be the norm if you’re in a big shop.  Keep this kind of activity to a minimum if you can help it.

4. How much activity is on the files.

This one is almost like the other one, only this doesn’t necessarily rely on what else is on the drive.  This counts even if the file is on its own drive.  If the file grows a lot every day or every week, then you don’t want to take a chance on missing an email alert or whatever else you use and having the file fill up because you didn’t grow it.  So while there may be some exceptions, my skills are better spent elsewhere than growing files manually.

5. Monitoring method

Many shops monitor with 3rd party tools and those tools monitor disk space.  However, none of them are smart enough to know the difference between a full drive and a full file.  You could have a 100GB drive with a 99GB data file on it and the alarm will trip even if the file is only 3% full.  And depending on whether or not your monitoring team is friendly, they may or may not help you out by either turning off the alarm on that drive, or doing something so that it knows something about the space inside the file.  I’ve honestly worked with both friendly and unfriendly teams.  So I could either set up an outlook rule to ignore all space alerts (bad idea) or shrink my file back again so it didn’t trip the alarm.
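
This is also why it pays to check space at the file level yourself instead of trusting the drive alarm.  Here’s a sketch that compares each file’s size to the space actually used inside it (the server and DB names are made up; sys.database_files reports size in 8KB pages, so dividing by 128 gives MB):

$q = "SELECT name, size/128 AS SizeMB, FILEPROPERTY(name, 'SpaceUsed')/128 AS UsedMB FROM sys.database_files"
Invoke-Sqlcmd -ServerInstance "MyServer" -Database "MyDB" -Query $q | ft -auto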

Conclusion

So you can see there are several factors involved with this decision and chances are you’ll have a mixed solution.  I’ve worked in shops where I never managed space at the file level, and shops where it was very necessary, and everything in between.  For me, #1 above is one of the biggest deciding factors.  I’m constantly fighting DBAs growing files a lot to be proactive and then we can’t restore to any of the other environments.  Even DR becomes an issue because you have to have that space anywhere you restore those DBs.  And that’s a lot of extra space to keep on hand for such little return.

Don’t get me wrong, I’m not a big fan of thin provisioning either.  I think that’s going a bit far, but it’s basically the same thing at the SAN level.  Thin provisioning is AutoGrow for the LUN itself.  And the biggest problem I have with it is that they tend to not grow it enough, or they set the threshold too high, so the file fills up and brings the DB down while you’re still waiting for the LUN to expand.  If they can get it right though, it’s not the evil it used to be.

So what we’re really doing with AutoGrow is thin provisioning our DB files.  And that’s actually much easier with IFI because data files expand in just a couple seconds.  That’s only for data files though.  Log files still have to be zeroed out, so you can run into the issue now and then where the log file is still growing when the process runs up against the end of the current file and everything stops.  Hey, it happens.  Those are the cases where you might consider manually growing your log files.  These would be more DSS-type systems where it’s unlikely that you’ll restore the DB to a different box.

Having huge files can also slow down your DR plan.  If you’ve got a huge log file and a 30min SLA, you could easily spend more time than that just zeroing out your log file.  So you’ve practically guaranteed you’ll miss your SLA by trying to make sure you don’t run into an almost non-existent VLF issue.  You’ve got to consider that too.

So anyway, I hope this helps you at least consider the different factors involved in making this decision.  Leave me comments if I’ve messed something up really badly.  Or if I’ve gotten something really right.  Hell, just tell me I have great hair and call it a day.

What is an Overload?

When I teach PowerShell, the question I get from DBAs most often is: what is an overload? Because when we get into methods, you can’t go very long at all without stating that they’re overloaded, and most DBAs just don’t live in that world enough to know programming terms like that.

Overloading is when a function, method, or whatever can be called with different sets of parameters. Let’s take a good example of calling the ChangePassword() method under the Logins context. I’m gonna walk you through this step by step because I want to make sure you understand.

Let’s start by opening SSMS and right-clicking on Logins. Then go to Start PowerShell.
You’ll be presented with a prompt in the Logins context. Now, if you do a get-member you’ll see a list of properties and methods. We’re concerned with the methods here. As a quick aside, a property is an adjective. It tells you something about the object, while a method is a verb. It’s something you can do to the object. So in our case you can see several properties like SID. SID is a property of a login… it describes something about the login. But the method ChangePassword() is a verb. It’s something we can do to the login. Methods are like functions in SQL more or less. Anyway…

So here’s a query you can type to get down to the info we’re interested in. I’m going to narrow the results down to just the ChangePassword() method we’re interested in so we don’t have to pick through a bunch of results.

dir | gm | ?{$_.name -eq "ChangePassword"} | FL

Here’s a screenshot of the results.
[Screenshot: LoginsGM]

Ok, now we can actually talk about the overloads. Notice that in the Definition column there are 3 ways to call this method.

1. System.Void ChangePassword(string newPassword)
2. System.Void ChangePassword(string oldPassword, string newPassword)
3. System.Void ChangePassword(string newPassword, bool unlock, bool mustChange)

In the 1st one you can simply pass in the new password.
In the 2nd one you can pass in the old password and the new password.
And in the 3rd one you can pass in the new password along with a bool to unlock the acct and another for whether they must change it when they login next time.

Each one of these sets of parameter combinations is called an overload. Sometimes you’ll see a method that has no overloads. That means there’s only one way to call it. The Refresh() method is like this. It’s not overloaded because you can only call it one way… with no parameters.

dir | gm | ?{$_.name -eq "Refresh"} | FL

[Screenshot: RefreshGM]

You can see from the output here that there is only one item listed in the Definition column. There’s only one way to call the Refresh() method, and that’s with no parameters. If you try to call it any other way you will get an error.

So a method can either be overloaded or not. And this is how you know how to call any method. You do a get-member on it and look at its definition. You’ve probably noticed by now that the definition also gives you the data types and names of the parameters it’s expecting. Above for the ChangePassword() method you can see that the newPassword parameter is a string. So you automatically know you’re going to call it like this:
ChangePassword("MyN3wPassw0rd!!!")
And you know the 3rd overload takes a string and 2 booleans. So it’ll look like this:
ChangePassword("MyN3wPassw0rd!!!", 1, 1)
or like this:
ChangePassword("MyN3wPassw0rd!!!", $true, $true)
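
And to actually run one of those from the same Logins context, it looks something like this (the login name is made up, so substitute one of yours):

$login = dir | ?{$_.Name -eq "WebAppUser"}               # grab the login object from the provider
$login.ChangePassword("MyN3wPassw0rd!!!", $true, $true)  # 3rd overload: unlock and force a change at next login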

Ok guys, that’s about it for overloads. I wanted to give a nice thorough explanation for the beginners out there. Sometimes these concepts can be a bit foggy to understand and .Net guys always explaining things with .Net terms doesn’t really help. So I wanted to try to put it in nice plain English.
Let me know if you have any questions.

An Upgrade Disaster

I got an email from a user at another company today and he told me about the SQL 2012 upgrade they just finished. And apparently it was a disaster. Here’s the general gist of what happened.

They have a huge app that runs their entire business and the vendor talked them into upgrading to 2012. Originally they were slated to do tons of testing and upgrade probably sometime in November. But they decided not to listen to their DBA and instead allowed themselves to be led by the vendor, who told them that a SQL upgrade was easy and nothing to worry about. So they did some perfunctory testing and pushed the upgrade to this past week. I know, smart right?

So this vendor did their upgrade for them and it completed ok from what I know about it. The problems came after the upgrade. Now, I don’t have any real specifics, but I do know that it caused a 10hr downtime. One of the directors asked about the backout plan and he was politely told to mind his own business. Everyone is calling the upgrade a disaster. They didn’t have any way to restore in case the upgrade failed in a really bad way… and that means no final backup, no scripted objects, and no mirrored system. This was an in-place all or nothing upgrade.

Just so we’re clear on this, that’s not the way you run an upgrade. Upgrades take plenty of testing from the DB side and the app side. You should never upgrade anything without extensive testing. And you should always have a backout plan. Now, what does a backout plan really mean? Because I find that oftentimes the backout plan gets overlooked, and I think it’s mainly because people have a hard time defining it.

To me a backout plan means a few different things depending on what you’re after. Let’s take this upgrade as an example. No matter how good you think SQL upgrade is, there’s always something that can go wrong. So at the very least, you need to take a final backup of ALL the system and user DBs right before the upgrade. Make sure you kick everyone out of the DB first because it’s not a final backup if there are still going to be modifications afterwards. That’s a good start for sure, and what I’d consider to be a minimum effort. Here’s a quick list of the steps I like to take for an important upgrade such as this:

1. Copy all system DBs to another location. This just makes restore much easier because with DBs as small as most system DBs, you can just drop them back in their original location and you’re good to go.

2. Script all logins with SIDs (there’s a sketch of this after the list).

3. Script all jobs.

4. Make sure I have all SSIS pkg projects at the ready so I can redeploy all pkgs if I need to.

5. Do a test restore of the final backup before starting the upgrade.

6. Script any system-level settings like sp_configure.

7. Script any repl, log shipping, mirroring scenarios.

8. Make sure I have pwords to any linked servers. While I try to keep everyone off of linked servers I have to admit they’re a part of life sometimes. And you don’t want your app to break because you don’t know the pword to the linked server. It’s not the end of the world if this doesn’t happen, but it’ll make life easier.
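
For step 2, here’s a sketch of scripting the logins with their SIDs through SMO (the server name and output path are made up). One caveat: SMO won’t script out password hashes for SQL logins, so you may still want the classic sp_help_revlogin approach on top of this:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server "MyServer"
$opt = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$opt.LoginSid = $true                          # carry the SIDs along in the CREATE LOGIN scripts
$srv.Logins | ?{$_.Name -notlike "##*"} |      # skip the internal certificate logins
    %{$_.Script($opt)} | Out-File "C:\PreUpgrade\logins.sql"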

So basically, the more important the DB, the more of these steps you’ll follow. You need to prepare for a total meltdown and make sure you can recover in as timely a manner as possible. As I sit here and write this I feel stupid because it seems so basic, but there are clearly those out there who still need this kind of advice, so here it is.

And if you have a good test box handy, make sure you test as many of these procedures as possible. Script out your logins, etc and restore them to the test box and see if things work as they should. Deploy your SSIS pkgs to your test box and make sure they run, etc. Don’t just rely on what you think *should* work. Actually make sure it works. This is why some upgrade projects take months to complete. It’s not the upgrade itself, it’s all the planning around it. And while this isn’t a full list of everything you could do to protect yourself, it’s a damn good start.
Happy upgrading.

The Ultimate DR Plan

I know the world is full of DR plans and DR advice, but this is honestly the last DR plan you’ll ever need. I’m not sure what made me think of it, but it popped into my mind on the way home the other day.

We all have web content. Some of us only blog, some of us run websites and blog, etc. Some of us have our stuff hosted, some host our own, and others do a combination of both. Personally, mine is kind of a mixed bag. I used to host my own content but my home pipe couldn’t handle it anymore so I had to push it to a host. I still control my own DNS though.

So here’s what hit me the other day. What’s going to happen to my content when I die? I have 2 major websites and a couple blogs, and currently I’m the only one with login info to any of them. Well, I suppose you could say who cares what happens to my content once I’m dead. After all, I’m dead. But we plan for all kinds of things upon our death, so why not this? Do you really want all the content you spent years churning out to just go away for lack of payment on your host? Sure, there won’t be any new content by you, but you can try to ensure that someone carries on your site and at least keeps it running. Oh, I know it’s not the biggest thing you have to worry about, but at some point you should at least consider it.

My initial thought was I have Jen to fall back on. She’s my DR plan. But what if we bite it together? We’re on a roadtrip somewhere and some drunk driver wipes us both out. Then she’s not there to back me up that’s for sure. So what happens to MidnightDBA.com and ITBookworm.com? I really want them to kind of carry on for as long as they can. Maybe whoever takes it over will get a community of DBAs to pick up the slack and continue posting.

Look, I know this is morbid, but you guys deserve it.

One thought is to get someone from the community whom I trust to pick up the sites and keep them alive as long as they can (or like). That would only really entail giving them my admin accts and periodic source code. That part’s easy. But then it dawned on me. How would anybody know we weren’t around anymore? We live out here in Dallas and as far as anybody knows if we don’t show up to the user group for a couple months we’re just busy. If we don’t go to any sqlsats for a while we just can’t get away. If we don’t answer emails or tweets maybe we’re just offline trying to simplify our lives. And the same goes for not answering the phone. The point here is that we really don’t have anyone who would be there to tell the community (or even our good friends) that something happened to us. Jim (Jen’s brother) might eventually think of it, but it wouldn’t be his first thought. Who knows how long it would take him to get on that.

And a lot of us are in that same boat. There are plenty of people in SQL that I almost never see or hear from so how would I find out they were dead? Maybe they don’t have a spouse who’s close to the community and wouldn’t think to inform anyone. And as far as we know they just fell into working really hard and aren’t showing up for any of the normal stuff.

So anyway, I don’t have all the details worked out yet, but I’m going to be slowly putting something together and trying to cover a few of the bigger bases. Because I don’t want my sites to just die when I do. I kinda want people to get something out of them until the content ages out.

So what are your thoughts on the matter?

What an idiot!

As DBAs we quite often run into others who aren’t as smart as us.  The dev is an idiot.  The .net guy is an idiot.  The users are idiots.  The manager is an idiot.  The VP, well don’t even get me started.  And other DBAs are really idiots.  At least that’s how it is in our heads anyway.  We fall into this cycle of calling everyone idiots for every little thing they do wrong.  The dev uses a wrong data type and it makes a few queries a lot slower, what an idiot, he should’ve known better.  A .net guy uses EF instead of putting it in an SP and it causes tons of blocking, what an idiot.  Another DBA tries to fix a DB that’s down and he does something that ends up making it worse… what an idiot.

It’s pretty easy to say everyone’s an idiot when we have the luxury of hindsight isn’t it?  Sure, I could have told you that every single one of those decisions was wrong and why.  But could I have told you before you did it and it went south?  Maybe, maybe not.  I’ve made plenty of mistakes in my career (and am dedicated to continuing) that in hindsight weren’t the best option, but was I an actual idiot for doing it?  Again maybe, maybe not.

I just think we jump on the idiot bandwagon too early and too often.  And I know I’m a big offender.  It doesn’t take much for me to start branding people left and right, but I also try to temper it with some reason.  Just because someone doesn’t have the same experiences I do doesn’t make them an actual idiot.  A dev chooses the wrong data type for a column.  Is he an idiot, or does he just not have the same experience with data types that I do?  I’d have to say it depends on what the mistake was.  Did he choose a varchar(25) for Address, or did he choose datetime?  Because one makes him less experienced with addresses and the other one makes him pretty close to an idiot. Well, what if he chose the bit data type for a SalaryAmount column? Well, I can only hope that he’s writing the table his own salary will be stored in.

I’ve seen plenty of things that seemed to be basic that I didn’t know. And that’s because there’s just so much to know it’s hard to quantify. That’s why I make sure I interview everyone I see for at least an hour before making a decision. I honestly believe you can’t judge the sum of someone’s experience in just a handful of questions. In fact, I’ve found plenty of guys who got the first 20 questions wrong and then we suddenly got to their area of expertise and they started blowing the questions out of the water.

So anyway, just give some of these guys a break and realize that they may not be complete idiots just because they don’t know something you don’t. That’s not to say there aren’t any real idiots out there. You guys know that I’ve definitely run into my fair share of them. But I’m trying harder to lighten up on them.

Part of the problem is the learning process we go through, which is next to none. Computers are hard. SQL is hard. .NET is hard. They’re all hard. And yet training is so poor. I’ve seen more IT training than I can count, but the number of courses I’ve been in that actually taught the topic is very few. Sure, the high-level stuff gets taught, but the hows and whys of doing things are rarely covered. There are some guys out there who really take the time to break it down for you, but try to find one of them.

One of the biggest reasons I never got into BI is because all the BI guys teach beginning BI like you’re already a BI expert. They explain BI terms with other BI terms and everyone just nods and smiles. But I guarantee you that most of them walk away without a good understanding of what was just said. .Net guys are big offenders in that area too. They explain .Net to you like you’ve been a coder for years and you’re just supposed to know what all this stuff is.

So it’s no wonder that so few people really know their jobs well. They’re never taught what they need to know. So are they really idiots for not knowing something they weren’t taught? There are so many things that can go wrong with a system at any given time, how can they be sure that the issue is being caused by a bad data type, or by one particular piece of code? There are of course ways to find out, but so many companies are in such a hurry to move on to the next project they never get a chance to dig into these issues. And again, were they really taught how?

So here we are in the middle of the learning revolution and there’s so little quality training to be had. You can go almost anywhere and learn how to perform the steps for a task, but where do you go to learn what you actually need to know? How do you learn that one thing is stupid over another thing, and that other thing exists for a reason, so when is it supposed to be used? I was talking to someone about this very topic just this morning.

So this whole thing was prompted by a training session I had with someone not long ago. Someone did something they shouldn’t have and when I corrected them they asked why. And when I gave my reason he said, “oh yeah, I never thought of that.” And I could clearly see that he wasn’t an idiot, he just didn’t have the experience he needed. And since then he’s done it right and even done it the other way a couple times because the situation was different. See, I gave him the reasoning so now he can reason out for himself when to use one method over another. And that’s training that’s worthwhile.

To me, a true idiot is someone who gets shown the way to do things right and still refuses to employ them. He is also someone who has been in his current career for many years and doesn’t even know the basics. I have very little patience for say a SQL dev who’s been doing it for 10yrs and doesn’t even know the basics of the data types. Because you can’t tell me that it’s never come up. I also don’t like DBAs with 10yrs behind them who can’t write a simple backup statement. Again, that’s a basic that you should know cold.