All posts by Sean McCown

I am a Contributing Editor for InfoWorld Magazine, and a frequent contributor to SQLServerCentral.com as well as SSWUG.org. I live with my wife and 3 kids, and have practiced and taught Kenpo for 22 years now.

Test your Powershell prowess

Ok, well I’ve done quite a few PS posts now, and while it’s great to learn, it’s also great to test yourself. So here are some exercises you can use to test your skills. These are all very common PS tasks for enterprise admins. Some will be easy, and some will be more difficult. So here, take some time and see how it goes. And if you like, post your answers in the comments of this post for everyone else to see.

I’ll post my answers in a separate post. And again, all these solutions will be in PS.

1. Get the total size of all the DBs on a server.
2. Cycle through a list of servers and get the service account that all the SQL services are set to start with.
3. Script all the SPs in a DB to a txt file. Make sure they’re separated by GO stmts or the script won’t run.
4. Change the default file location for a group of servers.
5. Cycle through all the jobs on a server and change all their owners to sa.

Alrighty… that’s about it. Good luck.

Daddy is happy

Sometimes there’s a method to my madness. I’ve got a Jr. on my staff who is just now learning the ropes. Now, I don’t mean a Jr. in the sense that he just doesn’t know as much SQL as I do, I mean it in the real sense. He actually knows practically nothing. He’s not even a big Windows guy.

So teaching him can be challenging because where do I start? Do I start with Windows, networking, data types, what? Well, I decided to start him on backups. Like many times before, I figured that would give him the best chance to learn the basics. See, in any large shop, you do backup/restore constantly. There’s always someone needing a DB backed up or restored to one location or another. So there’s plenty of opportunity to practice. But practicing backup/restore is just one of the reasons I throw beginners into that area. The other reason is to teach them good solid basics all around.

Because along with the plethora of backup/restore in a large environment, there’s also a plethora of problems that go along with it. You try to back up to a location and find out you don’t have enough space, so you have to decide what goes and what stays. You try to restore to a DB and you don’t have enough space, so you have to restore different files to different drives. You find one acct doesn’t have permissions to a location, so you have to take care of that. Then you find out that backups have been failing on these boxes over here, so you have to look into what happened there. Then you have to work with the tape guy to bring some backups from offsite and try to find space on a server to restore them. Maybe you even have a tricky restore that requires some actual thought too. The point is that there are so many pitfalls in backup/restore that it’s an excellent place to start a beginner. Without even knowing or trying, they learn networking, Windows security, SQL security, backup/restore syntax, file rotation methods, space mgmt, DNS, trace flags, and more. It’s a fabulous way to begin your career as a DBA.

Now, why am I so happy? Well because in this case my Jr. was talking to someone and trying to explain to them why he couldn’t restore their DB the way they wanted. He actually did a pretty good job. When he was finished, he looked over to see me smiling at him. He instantly said, what? I said, oh nothing. Then he’s like, then why are you staring at me. I said, did you hear what you just said to that guy? He goes, no. I was just explaining to him why I couldn’t restore his DB the way he needed. I said, ok, and why is that. He went on to explain to me what the problem was. I said, ok then how can we fix that. He said, well under the current circumstances we really can’t. That’s what I was trying to explain to him. I just smiled again. He said, what? Why do you keep doing that?

I said, look at you. 2 months ago you wouldn’t have been able to do that much less explain it. I never taught you that, where did you come up with it? He said, well I’ve done it so much and I’ve just learned that… all of a sudden his eyes got really wide. He looked at me and said… OHHHHH so that’s why you want me doing all the backup/restore. Because I learn all this other stuff with it. I said now you’re getting it. And it’s just cool to see that it’s working.

This is what I’m talking about when I talk about mentoring. A good mentor can show you where you need to go in the order you need it. And I’m not bragging, I’m just saying that it takes more than just reading books. You have to be with someone who’s been there a couple times. He can teach you how to think like a DBA. Hell, I’ve learned all my .NET from books and blog samples, and I actually suck. I get things done, but I’m no real coder in any sense of the word. Well, I’m on the receiving end of that now cause our big .NET guy at work has been showing me things and actually mentoring me a little, and it’s really cool the things I don’t know. So I’m even getting it a little myself. Cool stuff that.

Dropping DBs in Powershell

As long as I’m on a roll here with server-level ops, I thought I’d go ahead and cover dropping DBs in PS. This should be a fairly short post as there’s not much to this, but it’s worth talking about anyway.

Dropping DBs isn’t any big deal in T-SQL:

DROP DATABASE SeansDB

And the PS method for doing this is of course a 1-liner, but it’s not as easy as T-SQL. Again there’s more than one method for doing it and I’ll show you both of them. Here, whether you connect through sqlps, or through PS proper, you want to be at the Database node. Here’s a screenshot of doing it from sqlps, but again, it really doesn’t matter.
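If you’re coming in from PS proper, something like this gets you to the Databases node. Just a sketch… it assumes the SQL provider snap-ins are loaded, and the server and instance names are placeholders:

cd SQLSERVER:\SQL\server1\DEFAULT\Databases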

Method 1

dir | %{$_.Drop()}

There are a couple things to note here. First of all, this is typical PS, so if you run this cmd as it’s written here, you’ll drop every DB on your server… totally uncool. And 2nd, I’m always talking about how if you see an Alter() method on the get-member list then that usually means it wants you to use it before the changes are pushed to the server itself. Well, this is one of those times that makes me put the ‘usually’ in there because while that’s a good rule of thumb, PS is nice enough to drop objects when you ask it to. So anyway, unless you want to lose everything, I wouldn’t run the code above. I just wanted to have a basis for future syntax.

So all we have to do now is limit our result list to the DB we’re interested in:

dir | ?{$_.Name -eq "SeansDB"} | %{$_.Drop()}

It just doesn’t get any easier than that. Now, at this point T-SQL is still ahead, cause even I would still much rather use T-SQL for dropping a single DB. Powershell is going to pull ahead pretty fast when it comes to dropping several DBs, or even a single DB on multiple servers.

Let’s say you’ve got several test DBs, and they all have the word ‘test’ in them somewhere. Since different devs created them, they don’t have a solid naming convention. Hell, some of them may even just have ‘tst’ in them, who knows, right?
At this point it’s just a matter of altering the above script so that it can accommodate your criteria.

dir | ?{$_.Name -match "test" -or $_.Name -match "tst"} | %{$_.Drop()}

T-SQL would require you to code a cursor for this, and while the for-each is technically a cursor in PS, it takes next to no coding for us. PS is starting to pull ahead a little now. And by simply changing the where-object criteria, you can easily change this script to do things that are more difficult in T-SQL like dropping DBs that are all owned by a certain login, or were created on a certain date, or are under or over a certain size, or even haven’t been backed up recently. Of course, some of that criteria you’d never use, but it’s all there for you. And again, you can find out what you can use by doing a get-member and anything that comes up as a property is usable.

dir | gm
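For instance, dropping every DB owned by a certain login, or every DB that hasn’t been backed up in 90 days, could look something like this. This is just a sketch… the login is a placeholder, and the property names came off my get-member list, so check them against yours before you pull the trigger:

dir | ?{$_.Owner -eq "DOMAIN\OldDevGuy"} | %{$_.Drop()}
dir | ?{$_.LastBackupDate -lt (Get-Date).AddDays(-90)} | %{$_.Drop()}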

Method 2
Now let’s look at a shorter method for dropping a single DB.

(dir SeansDB).Drop()

That’s pretty easy, and to do multiple DBs, it could look something like this:

(dir | ?{$_.Name -match "test" -or $_.Name -match "tst"}).Drop()

Now, if you have a DB that exists across multiple boxes then you can drop all of them like this:

$a = "server1", "server2", "server3"
$a | %{
$ServerName = $_; ## Just setting this to make the script easier to read.
cd sqlserver:\sql\$ServerName
(dir SeansDB).Drop()}

And that’s it. Going against multiple boxes is so easy. And the way I’m doing it by setting the $a var to the server list, you can easily populate $a any way you like and you don’t have to change the rest of the script. So make it the results from a SQL query, or from a txt file, etc. It doesn’t matter.
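For instance, either of these would populate $a just fine. Just a sketch… the file path and the inventory table are made up, so point them at your own:

$a = Get-Content "C:\Temp\ServerList.txt" ## one server name per line
$a = Invoke-Sqlcmd -ServerInstance "InventoryServer" -Query "SELECT ServerName FROM dbo.ServerList" | %{$_.ServerName} ## Invoke-Sqlcmd ships with sqlps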

OK, so that’s it this time. Be careful with this or you’ll find yourself doing a recovery on all your DBs. Man, it would be really easy to be evil with this huh? In less than 60 secs you could kill every DB in your company if you wanted to… say if they fired you for no reason and you wanted to get back at them. I’m not saying you should, I’m just saying it’s possible.
And after something like that I’m compelled to say (again) that I’m not responsible for your (mis)use of any of these methods.

Killing SPIDs in Powershell

Today we’re going to continue our exploration of the server-level in PS.  You’ll remember we explored it a little bit before, when we talked about querying cluster nodes.

So today we’re going to kill users in a DB.  This is incredibly easy to do in PS.

First, let’s take a look at the T-SQL counterpart. Pretty much everyone has a script like this.

DECLARE @currUser varchar(100),
        @SQL nvarchar(200)

DECLARE Users CURSOR
FOR SELECT DISTINCT spid FROM sys.sysprocesses
WHERE [program_name] LIKE '%businessobjects%'

OPEN Users

FETCH NEXT FROM Users INTO @currUser
WHILE (@@fetch_status <> -1)
BEGIN

	SET @SQL = 'KILL ' + @currUser
	EXEC (@SQL)
	--print @SQL

	FETCH NEXT FROM Users INTO @currUser
END

CLOSE Users
DEALLOCATE Users

And of course, the problem is as it always is in T-SQL. It doesn’t scale to several boxes, and if you find yourself w/o your script, it’s not really easy to reproduce off the top of your head unless you’re one of those freaks who can write cursors w/o looking at a map. And I don’t know about you, but I quite often find myself w/o my scripts, so I like things that are easier to reproduce.

And like I said, PS scales very well to multiple boxes. Why would you want to run it against more than one box you ask?
Well, let’s say you’ve got a single app server that hits multiple DB servers and you want to kill them all for a maint window. You can’t assume that stopping the service on that app server will kill the DB connections. In fact, it quite often doesn’t. There are other testing scenarios where this could be useful, but I’ll let all of you use it as you see fit. The point is it’s here if you need it.

So to do this in powershell:
Start by connecting to a DB at the server level (either in sqlps or in PS proper… my examples will be in sqlps, but it really doesn’t matter as long as you connect).

In sqlps you can do this by doing a right-click on the servername and going to ‘Start Powershell’.

Then you just need to go up one level like this:

>cd..

Now you’re ready for the command. 

So let’s say you want to drop all the spids connected to SeansDB.  And as usual, there are 2 ways to do this.  I’m going to show you both of them just for completeness.

Method 1:

dir | ?{$_.Name -eq "SeansDB"} | %{$_.KillAllProcesses("SeansDB")}

Now, those of you who know PS, know this is actually quite wasteful. What you’re doing is getting a list of all the DBs, then filtering it down to one and then running the method.
And of course since you call the method with the DB name to begin with this is actually useless. However, it’s still a legal method so I wanted to mention it.

Method 2:

You’ve got to think about what you’re doing here so you can make the right decision.  When you do this in T-SQL through sys.sysprocesses, you’re working at the server-level, so the only place individual DBs come into play is with the result set of whatever cursor you write to limit your results. And quite often killing all the spids in a DB can be very useful. So here’s a better way to go about it in PS.

(dir).KillAllProcesses("SeansDB")

This is easy to remember, and easy to type. And of course, it’s obvious what it does… it kills all the spids connected to SeansDB. Now, you can also kill just a specific spid like this:

(dir).KillProcess(69)

And that’s how PS can bring that long T-SQL cursor down to almost nothing. What? What’s that? You want more? You want to be able to kill spids based off of application, or CPU, or some other criteria? Well, now you’re just being demanding, but I think PS can do something for you. This is just going to take an extra step and here’s what it’ll look like.
In our example here let’s kill spids by application. So we’ll kill all users who are connecting through SSMS.

$server = dir ## hang on to the server object, cause KillProcess() lives on it, not on the process rows
$a = $server.EnumProcesses()
$a | ?{$_.Program -match "SQL Server Management Studio"} | %{$server.KillProcess($_.Spid)}

Now, there’s a treasure trove of stuff in here. Let’s take a look at this in detail, especially the EnumProcesses() method.
This method is overloaded so you’ve got some nice functionality built-in.
Here are the documented overloads:

$a = (dir).EnumProcesses() ## Gets all processes.
$a = (dir).EnumProcesses($True) ## Excludes system processes.
$a = (dir).EnumProcesses(69) ## Get info on a single spid... 69 in this case.
$a = (dir).EnumProcesses("domain\user") ## Gets processes run by a specified login.

And now that you’ve got your list of processes, you can do a get-member on them to see what properties are available to you. So remember above when we killed spids by Program? Well, you can kill by anything returned as a property from your get-member. Here’s a screenshot of the results I got from mine.

Killing processes in Powershell is so easy it almost makes me feel stupid for documenting it. And while you may be used to doing things like this in T-SQL, give this a try and I think you’ll find you like it much better once you get used to it.
And I mentioned that it scales to multiple boxes really well, and it does. I just didn’t show you that here cause it’s pretty easy to figure out.
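OK, fine… here’s the flavor of it anyway. A minimal sketch, assuming a single default instance on each box and the same server-list trick from the dropping-DBs post:

$a = "server1", "server2", "server3"
$a | %{
cd SQLSERVER:\SQL\$_
(dir).KillAllProcesses("SeansDB") ## assumes one (default) instance per box
}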

And DO use this with care. It’s so much easier to kill everything on the box in PS than it is with T-SQL. And I’m not taking any responsibility for how you (mis)use this knowledge.

Checking for Active cluster node

Ok, I was on this blog today and it showed a cool Powershell method for checking which cluster node is active.  And while there’s nothing wrong with the script, it does go to show the thing I dislike the most about how many DBAs work with Powershell in SQL Server.  Now, I can’t stress enough that I’m not picking on this guy, and I’m not saying he’s wrong.  I’m just saying that I prefer to rely on the built-in methods of the provider because it’s just simpler.  Here’s what I mean.

I’m going to borrow his code for a moment just to show you an example.  But I encourage you to visit his blog yourself as he’s got some cool stuff out there.  Anyway, here’s his method for doing this in PS:

# Set cluster name
$cluster_name = "ClusterName";
# Load SMO extension
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null;
$srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $cluster_name;
# Get server properties
$properties = $srv.Properties
$owner_node = $properties.Item("ComputerNamePhysicalNetBIOS").Value;
$is_clustered = $properties.Item("IsClustered").Value
if($is_clustered)
{
	Write-Host "The current active node of $cluster_name is $owner_node.";
}
else
{
	Write-Host "$cluster_name is not a clustered instance of SQL Server.";
}

Overall, there’s nothing wrong with that except it’s just too long.  See, too many DBAs forget what Powershell is all about so they’re constantly re-inventing the wheel.  I like to use the kind of wheels I’m given whenever possible.

Here’s how I would tackle this same task (there are actually 2 ways):

Method 1

>$a = dir

>$a.ComputerNamePhysicalNetBIOS

Method 2

>(dir).ComputerNamePhysicalNetBIOS

And that’s it.  Why would you want to type all that other stuff when you can so easily type just a few characters… 33 to be exact. 

All you have to do is make sure you’re in the server node of the provider tree. 

So if you connect through sqlps, then you’ll right-click on the server itself and that’ll take you to the Default instance.  Then you just need to go one level up by typing ‘cd ..’  From there, just type one of the lines above and you’re golden.

Oh yeah, his script also told you whether the box was even clustered to begin with.  We can handle that with another line.

>$a.IsClustered

That returns ‘True’ so if you want to make it pretty for yourself then just a quick change can make that happen.

>IF ($a.IsClustered) {"It's clustered alright."} ELSE {"No it isn't."}

And strictly speaking you don’t need the ELSE in there.

Now, using both methods you can easily cycle through a ton of boxes so there are no worries there.  
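A sketch of what that cycling might look like (server names are placeholders, and again I’m assuming a single default instance on each box):

$servers = "server1", "server2", "server3"
$servers | %{
cd SQLSERVER:\SQL\$_
"$_ : " + (dir).ComputerNamePhysicalNetBIOS ## which node each one is sitting on
}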

I’m gonna get on my soapbox for a minute and say that this is the bulk of the PS I see being taught and to me it shows a lack of fundamental understanding of what PS is supposed to do for us.  I just prefer to use what the provider gives me.  And it really matters too.  If you find yourself somewhere without your scripts, which happens to me all the time when I’m working from a user’s box, or at another shop, then you’ve got to remember all that SMO provider info.  My method is not only much easier to remember, it’s much easier to investigate because you can easily do a GM against $a to see all the properties it has.  But if you go even just a couple weeks without typing in that code to load the SMO provider, you can forget nuances and you’ll find yourself looking something up.  And that’s not efficient DBA work.  PS is supposed to make our jobs easier, not complicate them.  PS isn’t just a replacement for VBScript.  It’s an entirely new way of thinking.  And there are tons of guys out there teaching PS who haven’t switched to the PS way of thinking.  To me this is the exact same thing as those guys who use those really complicated cursor methods for finding log space usage when they could just type DBCC sqlperf(logspace).  Can you do it the other way, sure, but why would you when the SQL team has given you such an easy method?  This is why most of the time whenever I see a PS blog somewhere, I typically have to translate it into real PS.

So guys, let’s get rid of all that needless SMO code and bring PS back to what it’s supposed to be… simple one-liners that make our job easier.

I also did a video last week about changing server-level properties that talks you through the same methods as well.  Take a look.

http://midnightdba.itbookworm.com/VidPages/PowershellServerProps/PowershellServerProps.aspx

Forget Sr. DBAs

One of the things I’ve blogged quite a bit about is what makes a good Sr. DBA.  Well, forget that for now.  What I wanna talk about this time is what makes a good Jr. or Mid DBA.

Not that anyone really works their way up anymore, but I’m going to talk about it anyway.  In my estimation, a good Jr. basically shuts up and does what he’s told.  I know that sounds harsh, but at this stage your job is to learn and do the grunt work nobody else wants to do.  This is how you learn the basics.  Don’t expect us to include you in on intricate HA/DR discussions, or in advanced security meetings, but also don’t expect us to give users readonly access to the DB either.  It’s not like we’re really above it, but we know how to do that stuff.  I remember when I started, I was definitely a Jr. and I loaded flatfiles with BCP almost exclusively for about a year.  I learned all about how to check files for dupes, and about PK violations, and about non-logged ops, etc.  The funny thing is, I was never bored with it… not even for a second.  I always found it interesting the different ways people found to mess up flatfiles, and controlling the logging, and back then the files didn’t autogrow so I learned all about file size management, and all kinds of things.  So if you pick the right beginning, it can teach you all kinds of things you need to know.  And if you’re in the hands of someone who knows how to guide you then you’ll definitely learn what you need to know.  And I’m not meaning to come off like a conceited jerk.  If you’re a true Jr. then you’ve got a lot to learn, and free thought isn’t what’s required.  You’re supposed to learn.  Keep your mouth closed and your ears open and practice-practice-practice.

Now as a mid, you’re kind of in between so you’ve got a foot in both camps.  You’ve also got a lot of learning to do, but at the same time I expect you to be coming to the table with more ideas.  Your job is to expand your horizons and step out and learn a lot on your own, and not only suggest things for your shop, but maybe even surprise me with a couple mock-ups for pet projects you’ve taken on to make something better.  That project can be anything from a management website, to a set of SSRS reports for something we need, or maybe put a cube around some of the metrics we’re recording, or something.  My point is that your job is to really start learning how to think like someone who wants to lead the shop one day and you’ll never do that unless you practice it.  Being a leader doesn’t come overnight and neither does coming up with solutions.  And I did say leader, cause there’s a big difference between a manager and a leader.  You don’t care much about being a manager, but you do want to be a leader.  And to be a leader, people have to follow you, and nobody will ever follow you if you don’t know what you’re doing.  And once again, if you have a good Sr. he’ll know how to take you to that next level.  At this point it’s all about being challenged.  You’re learning how to lead people, and how to lead an entire company into their database future.

There’s one more thing I wanna throw out there for you… Winners want the ball.  So when the boss is handing out projects, raise your hand… esp for the ones that are slightly above you.  Get out of your comfort zone and force yourself to learn something under fire.  Sure, go to the other guys for help if you get stuck.  That’s what they’re there for.  But try something on your own that you haven’t done before.  It’s the only way you’ll really learn.

Tempdb Contention

I had a nice production problem today that slowed everything down drastically.  I’ll spare you the details of the user processes, but when looking in sys.sysprocesses, I noticed that the waitresource was ‘2:%’ (dbid 2 being tempdb).  I also correlated this with the wait_type column in sys.dm_os_waiting_tasks and saw a lot of PAGELATCH_UP types.  So the first thing I did was pull up the page# in DBCC PAGE, and noticed it was page type 11.
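If you want to look for the same thing, the checks are roughly these.  A sketch, not my exact script… the server name is a placeholder, and Invoke-Sqlcmd is right there if you’re already in sqlps:

$q = @"
SELECT spid, waitresource FROM sys.sysprocesses WHERE waitresource LIKE '2:%';
SELECT session_id, wait_type, resource_description FROM sys.dm_os_waiting_tasks WHERE wait_type LIKE 'PAGELATCH%';
"@
Invoke-Sqlcmd -ServerInstance "server1" -Query $q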

In my case, here’s what I typed:

DBCC TRACEON(3604) -- sends DBCC output to the client instead of the errorlog

DBCC PAGE(2, 1, 186204, 3) -- dbid 2 (tempdb), file 1, page 186204, print option 3

And I might add that there were a lot of them backed up.  I had something like 30 blocked processes and they were all waiting on this same page in tempdb.  Page type 11 is a PFS page, so this meant I was having contention in tempdb.

And since I always like the low-hanging fruit, I chose to add more files instead of using -T1118. 

So I added 6 files to the 16 that were already there and the problem cleared up almost instantly.
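For reference, adding a file to tempdb is a one-liner.  The drive, size, and file number here are made up, so fit them to your own box (and keep all the files the same size):

Invoke-Sqlcmd -ServerInstance "server1" -Query "ALTER DATABASE tempdb ADD FILE (NAME = tempdev17, FILENAME = 'T:\TempDB\tempdev17.ndf', SIZE = 4GB, FILEGROWTH = 0)"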

You don’t have to use DBCC Page though.  As it turns out, I was just surfing around afterwards to see what was out there on this issue, and I found a great blog by MCM Robert Davis that has a lovely query that’ll tell you right away whether you have tempdb contention.  I was gonna paste the query in here, but go read it for yourself.

A fun conversation about backups

While I was talking to one of my Jrs today about backups, a .Net guy poked his head around the corner to offer his opinion on the matter.  The subject was basically whether everything will be copied over if you do a full backup and restore it to another system.

Here’s basically how the talk went:

.NET – Well, it really depends on whether you have different filegroups as to whether everything will be restored.

DBA:  No, if it’s a full backup and it restores, everything is there.

.NET:  Well, yeah, but I’m just saying that DBs can have a lot of filegroups sometimes, and if they do, then you might not get all of them.

DBA:  No, if it’s a full backup and it restores, everything is there.

.NET:  But…

DBA:  There are no buts… if it’s a full backup and it restores, everything is there.

.NET:  I’m just saying that…

DBA:  No, if it’s a full backup and it restores, everything is there.

.NET:  You can’t deny that there are several filegroups, right?

DBA:  I would never try to deny that.

.NET:  And if you backup those different filegroups, then you can only restore some of them, therefore you can have a DB with some filegroups unrestored.

DBA:  This is correct.

.NET:  So back to my original…

DBA:  No, if it’s a full backup and it restores, everything is there.

.NET:  No, because you just said that you can backup different filegroups and restore only part of them.

DBA:  Yes I did.

.NET:  So aren’t we saying the same thing?

DBA:  No, not even close.

.NET:  Why not?

DBA:  Because we were talking about full backups, not filegroup backups.  Full backups backup everything… it’s in the name.  Filegroup backups only backup filegroups… that’s also in the name.  But a full backup can only be restored fully.  There is no partial restore of a full backup.

.NET:  So you’re telling me that you backup the full DB and you can backup filegroups, but it’s a different kind of backup?

DBA:  Yes.

.NET:  But still, if you have…

DBA:  No, if it’s a full backup and it restores, everything is there.

.NET:  You’re not letting me finish.

DBA:  Yeah, because I know what you’re trying to say and there’s no wiggle-room.

.NET:  But if you have multiple filegroups, then…

DBA:  No, if it’s a full backup and it restores, everything is there.

.NET:  So are you telling me that there’s absolutely no way to restore only certain filegroups from a full backup?

DBA:  That’s exactly what I’m telling you.  Again, it’s in the name ‘Full Backup’.

.NET:  So is there a way to change the location of a filegroup when you restore the full backup?

DBA:  Only if it’s tied to a specific file and you restore that file to another location, then you’re really moving the file itself and the filegroup is coming along for the ride.

.NET:  But I keep thinking there’s got to be a way to…

DBA:  No, if it’s a full backup and it restores, everything is there.

.NET:  But if there’s not enough disk space and it only restores part of the DB then that would leave you with only part of the DB.

DBA:  No, it wouldn’t complete the restore and you’d have nothing.

.NET:  So if you take a filegroup backup then you can restore different filegroups.

DBA:  Yes.

.NET:  Then I’m still right.

DBA:  No, not even close.

.NET:  Yeah, because you can still do what I said you could do.

DBA:  No, because you’ll recall that there are 2 very important aspects of what I’m saying and they both have to be there… which is part of the original topic:  “If it’s a full backup” and “if it restores”.  If both of those exist, then you’ve got everything in the DB.

.NET:  But I still think you should be able to…

DBA:  No, if it’s a full backup and it restores, everything is there.

And then it just kind of tapered off into other backup discussions from there, but that was just fun.

Thought you guys might like a funny story on a Monday.

What makes a Sr. DBA?

I get indirectly asked this question all the time… what makes a Sr. DBA?  Well, that question is gonna be answered 20 different ways by 10 different DBAs.  But here’s my answer.

Aside from whatever specific qualities you assign to what a Sr should look like, here’s a stick you can use to measure yourself.  You should be able to meet both of these criteria.

  1. Are you right most of the time?
  2. Do you handle your own problems instead of calling daddy?

Ok, let’s talk about #1 first.  Nobody is right all the time, but are you right most of the time?  When you get called in on a crisis, do you diagnose the problem correctly, say, better than 90% of the time?  And do you discover and deduce the problem, or do you just fall into it?  Do your users come to you for answers?  In other words, have you earned your place as the go-to guy?

Next, how often do you call daddy?  If you’re in a shop with an existing Sr. DBA, do you call him when you have a problem or do you research and solve your own issues before getting him involved?  It’s always really easy to call the Lead DBA, but it doesn’t teach you anything.  And as long as you’re relying on his research skills you’ll never be the go-to guy yourself.

I remember it well.  Much longer ago than I care to remember, I asked whatever male figure I had how you know when you’re a man.  He told me something that stuck with me all these years.  He said, you know you’re a man when you quit calling your parents when you have trouble.  And I remember it hit me once when I was driving late at night and got a flat tire.  I just got out and changed it and went on my way.  And a year ago I would have called my folks to come help me.  That was my first hint that I may have crossed into manhood.  Because at some point you realize that it’s up to you.

It’s the same in the IT world.  You go through these years of learning and then of getting proficient, and at some point it dawns on you that it’s all up to you and only you can solve your problems.  You have to be the one to investigate and solve the blocking, or the deadlocks, or the excessive waits, etc.

And that doesn’t mean that you never need any help with anything.  Nothing could be further from the truth, but how often do you need that external help?

Am I a Dinosaur without the Cloud?

Man, you know, I keep looking at all this cloud advertising MS is putting out there, and all the propaganda they’re spinning around Azure, and there’s one thing that’s screaming loud and clear.  Those who don’t jump on the cloud wagon are going to get left behind.  I get this message mainly from those DBAs and evangelists who have bought into the whole bag, because when people buy into something that wholly, they can’t help but push it on everyone else.

But frankly I’m tired of being made to feel like I’m going to be a dinosaur or get left completely behind if I don’t buy into Azure.  I’m an enterprise DBA, and Azure is very new and as far as I’m concerned it’s just a way for MS to make sure they get paid for all their license fees.  It’s much harder to pirate software when you host it yourself.  But here’s something… and you’re going to find this shocking… just because MS had an idea and decided to market it doesn’t mean that the rest of us are going to fall behind if we continue doing what we’re doing.  It’s not like I’m refusing to learn any of the new features or to expand my understanding of current ones.  I study SQL quite a bit throughout the week and I work really hard to stay current.  And I think I can survive for quite some time without moving my company to the cloud.  Not only will it be a long time before it can do as much as I can with my own install, but there are still tons of privacy and regulatory issues to work out.

All of this is to try to make it sound like locally installed DBs are going the way of the mainframe, and nothing could be further from the truth.  There will be DBAs at companies for decades to come and if the time ever comes when I absolutely *have* to make the jump to Azure, then I’ll jump off that bridge when I come to it.  Until then, the only word I have for you true believers is this… don’t believe everything you hear from MS marketing.  And just because they came up with something you think is cool, don’t think it’s going to be the only game in town.

Besides, the cloud is mostly a marketing joke anyway.  The other word for the cloud is the internet.  And there have been hosted services on the internet for a long time now.  This is nothing new.  So when you see these commercials talking about taking advantage of the cloud, just remember that it’s just the internet.  I’ve been buying books in the cloud for years.  And I’ve had my website hosted in the cloud for years too.  And remember that new thing they came out with a couple months ago?  Apparently now they can host your email for you on special websites so you can get to it anywhere.  Wow, imagine that… email in the cloud.  What will they think of next?