Tag Archives: Coding Standards

Who’s afraid of cursors?

Ok, so one of my last posts talked about using a cursor for something, and I came upon some criticism and/or some wincing about the fact that I used that horrible word.  However, cursors, like anything else, have gotten a bad rap (xp_cmdshell is another).  People, there's nothing wrong with using cursors as long as they're actually what's needed.  The problem comes from frontend coders writing cursors for everything and severely abusing the feature.

But there's nothing inherently wrong with them… especially for admin tasks.  There are so many legitimate uses for cursors I can't even come close to naming them all, but here are a few…

Doing something to every DB on a server.

Doing something to every schema in a DB.

Doing something to every table in a schema.

Doing something to every user/login acct (like running sp_change_users_login to correct a ghosted user acct issue).

These are all perfectly legitimate uses of a cursor, whether it's T-SQL or powershell.  So let's get over this hangup we've got against both the concept and the word.  There's nothing wrong with it as long as you do it right.  After all, what is a cursor anyway?  It's a way to store a list of values so you can perform an action on each one of them.  And the alternative is to hardcode each one of those lines yourself or use the GUI for everything, and we all know that's completely unacceptable.
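Just so we're all picturing the same thing, here's a minimal T-SQL sketch of the first item on that list — walking every online user DB on a server.  The PRINT is just a stand-in for whatever admin task you'd actually run:

DECLARE @db sysname;

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE state_desc = 'ONLINE' AND database_id > 4;   -- skip the system DBs

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @db;   -- stand-in for the real work
    FETCH NEXT FROM db_cur INTO @db;
END;

CLOSE db_cur;
DEALLOCATE db_cur;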

And in powershell everything you do is a cursor… even things that don't need one.  Of course, they call it a foreach loop, but that's just a more acceptable name for a cursor, isn't it?  It's more specific… for each one of these things, I want to do this…  Hell, almost every backup routine I've seen, in the shops I've been in and on the web, cursors through the DBs on a server and backs them up.

So deciding not to use cursors (or at least wincing at them) because some people misuse them is like refusing to use a hammer on a nail because you once saw someone trying to open a banana with one.  It wasn't the right tool for that job, so it made a mess.  But it IS the right tool for a nail.

SSIS Email Chain

Today’s blog is actually an email chain between me and a user. It’s only a single question and reply, but I think it’s good info.

Q:
Hey, would you agree that a monthly load process is better served as an SSIS package – even if you have to push/pull from text files for now – than as a series of SPs or DLLs?

A:
if you're staying on the same box for the load then SPs can be attractive because they're very fast and the memory stays in sql where it can be managed quite well… if you move that process to ssis, and ssis is on the same box, then you have to allocate memory away from sql to run the pkg and house the data while it's in the buffer…

if ssis is on another box, but the data is still being moved between dbs on the same box… so if the data is being moved from server1.db1 to server1.db2 and ssis is on server2, then you don't have to fight sql for memory, but now you incur the network cost of moving the data off the box and then back to it…

if you're moving between boxes, then yes, ssis is the better choice, because in SPs you have to manage linked servers or openrowset to make that happen and that's not cricket…

however, what makes ssis attractive in the single-box scenario is that it handles errors more easily and alerting is much richer and easier to come by… you can also more easily fix problems in the data itself and it's easier to extend… so if your requirements change and you need to switch it to another box, or if you need to send a copy somewhere else, etc., that's much easier in ssis… ssis also gives you parallelism that you can't get in sps… you can load several tables at once in ssis where they'd have to be serialized in sps…

a good compromise in the single-box scenario is to keep things moving like they are where possible, and where not stupid… so if they've already got an sp that inserts data into the 2nd db, then just call that sp from ssis… this way you get the insert and select process in sql where it belongs, plus the workflow and error handling of ssis and everything else that goes with it… if a single sp inserts into several tables serially though, i'd create several sps and call them individually in ssis tasks… even if you have to keep them serialized (like for PK/FK issues)… because now you can add things like checkpoints and individual error flows to your pkg steps and have a richer experience…
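to put a picture to it, here's a rough sketch (all the table and proc names are made up)… one small sp per table instead of one monolithic loader, each called from its own Execute SQL Task…

-- one small sp per table; each one gets its own Execute SQL Task,
-- checkpoint, and error flow in the pkg
CREATE PROCEDURE dbo.LoadCustomers
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO db2.dbo.Customers (CustomerID, CustomerName)
    SELECT CustomerID, CustomerName FROM db1.dbo.CustomersStage;
END;
GO

CREATE PROCEDURE dbo.LoadOrders
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO db2.dbo.Orders (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate FROM db1.dbo.OrdersStage;
END;
GO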

these things are meant to let your process grow and change much more easily than an sp can… so if one of the DBs gets too big and has to move to another box, it's a simple matter in ssis… but in an sp, that means you have to create and manage a linked server, which is just one more thing, and they come with their own problems…

as for the DLLs… these processes NEVER belong in a DLL… not only do you have to steal memory from sql, but a DLL isn't optimized for data flow tasks… it's just a plain DB connection… you can get parallelism by calling the windows threading model, but then you have to manage those threads manually and mistakes get made… and that takes extra memory, etc… not only that, but the code isn't available for admins to fix problems, and making changes to compiled code can be dangerous, right… so no, that's not a good idea…

does this answer it… i realize you prob wanted more of a "yes, do this", but it's not that simple…

Valid Opinions

A friend of mine was in a meeting yesterday and he called me right after because he was upset. Apparently he and someone else had conflicting opinions about how to do something that neither one of them had ever done before… so they were both essentially guessing.

He was mostly upset because he said the other guy’s opinion was just as valid as his and so far it’s brought the group to a standstill while they figure this out.

I told him that there's no way both opinions are equally valid. In some instances that may be the case, but in this one, I happen to know who the other guy is, and when it comes to SQL brains he definitely got the short end of the stick. So I'm saying it outright: not all opinions are equally valid. My mother-in-law likes to put her 2 cents in when we're sitting around talking computers. She doesn't know the 1st thing about it, but she likes to give her opinion anyway. So when I ask Jen's brother something, his answer is worth more to me than her mother's. The same holds true here, right? Just because someone is in a group and was invited to a meeting doesn't mean they have equal say in things.

Here's another really good example… let's say there's a weird recovery scenario, or even a corruption scenario, that you come across. And you come up to me and Paul Randal at a conference and ask us our opinions. I'd say Paul's opinion in this case is worth far more than mine. Sure, I'm a qualified DBA and I've been around the block more than a few times, but Paul WROTE CheckDB, so I think he carries a little more weight than I do. Even if it's something neither of us has heard of before, I'd still take his guess over mine.

So no, I'm not one of those who believes that everyone's say is as important as everyone else's. Hell, I don't even believe that everyone should have a say. Some people are just not qualified to give opinions on a topic. In a restaurant you don't see the dishwasher getting a say in what specials go on the board, and in a dojo you don't see white belts chiming in on black belt promotions. So why is it that everyone in IT thinks they deserve an equal say just because they were invited to the meeting?

More Questionable Code

A couple pieces of code came across my desk yesterday and they're definitely the kind of thing you read about… if nowhere else, then here for sure.  This was code a dev sent me that's to be deployed next week.

The first one could actually cause a problem so I’ll talk about it first.

I’ll write the pseudo-code because I don’t have the actual code in front of me right now.  It goes like this:

1.  Check if the Whale schema exists.

2.  If it does exist then drop it.

3.  Create the Whale schema.

Ok, so that's exactly what the script does. Here, I'll go ahead and approximate the code.  I didn't look it up so it may have a slight error in it, but you get the idea.

IF EXISTS (SELECT name FROM sys.schemas WHERE name = 'Whale')
BEGIN
    DROP SCHEMA Whale;
END;
GO

CREATE SCHEMA Whale AUTHORIZATION dbo;

So what’s wrong with this code you ask?  It’s simple, here’s the logic.

First, if it exists and you drop it, then why turn around and recreate it in the next step?  What's the point of that?  If your goal is to have the schema exist, then everything above the 'create' line is useless.

And I know what you're thinking… so what?  What's the big deal if a tiny query gets run during a deployment?  It's not like it's going to drag the system down.  Well, you're right about that, but it could kill the deployment.  If that schema did exist and actually contained objects, the drop statement would fail until you put those objects somewhere else.  So with no code to check whether objects exist inside it, you could be dooming the deployment to an unnecessary error.  You could also say that you know the schema doesn't exist so there's nothing to worry about.  That's fine, then take out the check and the drop.  If you know it's not there, take it out.  It's called having concise code and it's something we lack in this industry these days.  Let me illustrate this with something completely ridiculous that also doesn't really affect performance.

CREATE TABLE #t1 (col1 INT);

TRUNCATE TABLE #t1;
TRUNCATE TABLE #t1;
TRUNCATE TABLE #t1;
TRUNCATE TABLE #t1;
TRUNCATE TABLE #t1;

INSERT #t1 SELECT 1;

Ok, so look at that code above.  I created a temp table and then truncated it 5 times.  That's not going to affect performance at all, because there's no data in it; it was just created and there are no pages.  Then I go ahead and insert a row.  I can't think of anyone who would let this kind of thing go in an SP, or in deployment code, but we're expected to let useless checks stay in our scripts.
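And if you really do need the check, the concise version guards the create instead of dropping and recreating.  A sketch with the same Whale schema (the EXEC is there because CREATE SCHEMA has to be the only statement in its batch):

IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = 'Whale')
BEGIN
    -- CREATE SCHEMA must run alone in its batch, hence the dynamic SQL
    EXEC ('CREATE SCHEMA Whale AUTHORIZATION dbo');
END;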

This code was clearly scripted in the wizard, so it's not like the dev went out of his way to do this by hand, but it's the mark of a fairly inexperienced coder to let the wizard create scripts and not inspect what it's giving you.

The other piece of code doesn't really do any damage so much as it's just annoying.  In fact, it's really 2 different scripts.  They're create view scripts and the first one reads something like this:

CREATE VIEW MyView
AS
SELECT MyFavoriteTable.col1,
       MyFavoriteTable.col2,
       MyFavoriteTable.col3,
       MyFavoriteTable.col4,
       MyFavoriteTable.col5,
       MyFavoriteTable.col6,
       MyFavoriteTable.col7,
       MyFavoriteTable.col8,
       MyFavoriteTable.col9,
       MyFavoriteTable.col10
FROM MyFavoriteTable AS MyFavoriteTable;

Ok, so even typing this in the blog pisses me off.  And again, it's just a style thing, but it drives me crazy.  What drives me crazy the most is that this dev doesn't understand what aliases are for.  Aliasing a table with the same name as the table itself defeats the purpose of having the alias in the first place.  A much better alias here would have been mft or mt or m.  Hell, you could even go as far as MyFT or something like that.  Here, I'll play one of those out for you and you tell me which one you'd rather read.

SELECT mft.col1,
       mft.col2,
       mft.col3,
       mft.col4,
       mft.col5,
       mft.col6,
       mft.col7,
       mft.col8,
       mft.col9,
       mft.col10
FROM MyFavoriteTable AS mft;

Forget reading; which one of those would you rather type?  It's just more concise and easier to read.  I happen to know the dev who wrote this, and his opinion is that the full table name makes it easier to read when you've got larger joins.  Personally, I've never known anyone to complain about short aliases before, and again, I honestly think this boils down to inexperience.  When I first started coding well over a decade ago, I needed things presented to me in very specific ways too.  It was the only way I could read the code.  That kind of thing is much less important to me now that I've been doing it for a long time.  Why?  Because I know how to code.

The second thing about this script that bugs me is that he saw the need to alias a query with a single table in it.  Now you're just being obstinate for no reason.  I know the argument though.  He'd probably say he did it for consistency.  The only problem with that is that the other create view script he submitted didn't have the same stupid aliasing structure, so where's the argument now?

Ok, so this code was brought to us as part of a code review, so the next logical question is: why don't we just reject the code if it bothers us so badly?  Well, the answer is simple.  Like so many places, our code reviews are merely perfunctory process placeholders.  Our lead dev has made it very clear that he's going to do whatever he likes no matter what we say.  We've rejected code plenty of times for really valid process or performance reasons and he always thumbs his nose at us and pushes it into prod anyway.  I really don't understand how he developed such a superior attitude towards the DBAs, but it's completely unwarranted.  He acts like we're in this just to piss on his parade and make him look bad, but we're really only trying to help.  But we don't even bother anymore.  What's the point?  He knows more than the rest of us put together, so we don't even bring stuff up anymore.

So that's my rant for tonight.  Remember, use aliases wisely and be as consistent as you can.  And for everyone's sake, listen to what your DBAs are telling you for a change.  They're really only trying to help.

The Clean House

This post was inspired while cleaning my house today.  It's easy: if you think you've got a clean house, just stand up on a ladder some time and change your perspective.  I think you'll find there are lots of things about your clean house you don't know.  And while nobody's officially complaining about the condition of the house (in fact, everyone thinks it's in great shape), it's clearly in disarray when you shift to that new bird's-eye view.

The same is true for your DB.  I honestly can't count the number of times I've been told there was no reason to look at DB performance because nobody was complaining about it.  Then when I dig into it and look at things from a DBA angle, the system is actually abysmal.  So if performance is so bad, why isn't anyone complaining?  Well, there can be a couple of reasons for that.  First, the DB may just not be very busy, comparatively, so the users don't notice the issues.  And second, they may have just gotten used to things running slower, so again they don't notice.  I've actually seen that several times.  Things get slower and slower, or better yet, they just start out slow and nobody ever knows the difference.  And what's even WORSE is that more often than not there's some cornhole in the background telling them that Oracle would solve all their problems.  Listen, if you can't write a simple process against mssql and make it run right, do you really think you'll fare any better on Oracle?

So I've gotten off topic a little… back on track.  The point of all this is that just because nobody's complaining about performance doesn't mean your system is performing.  I quite often find that companies have no idea what their systems can scale to; they just assume they'll be able to handle whatever gets thrown at them.  And it typically takes more forethought than most companies have to look at a system objectively and proactively fix problems.

So the next time you’re considering your system performance, stand on a ladder or crawl down on the floor and see how things look from there.


The Zombie Standard

As I promised yesterday, I'm gonna talk about process and coding standards today, and I want you to really think about what I'm saying here because it's what separates the men from the boys.

This topic won't ramble on quite as much as the past 2 days have, so don't worry.  Basically, what I want to say is that you should definitely be creating coding standards and following them, but don't rest there, and don't let any of your standards rest either.  What I mean is, just because you come up with a standard, say using 'insert into' instead of 'select into' during your ETL loads, don't always blindly follow it.  Every process is unique and every process deserves consideration.  You can adversely affect your servers if you do the same thing across the board without ever questioning whether it's the right tool for this particular job.  So where a 'select into' may be the perfect solution for one really huge snapshot load, an 'insert into' may be the right solution for other, lesser loads.
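For anyone who hasn't bumped into the difference, here's a minimal sketch of the two patterns (dbo.SalesStage and dbo.SalesSnapshot are made-up names, and the two aren't meant to run back-to-back since 'select into' fails if the table already exists):

-- 'select into' creates the destination on the fly; it can be minimally
-- logged under the right recovery model, but you get no say over filegroup,
-- perms, etc.
SELECT SaleID, Amount
INTO dbo.SalesSnapshot
FROM dbo.SalesStage;

-- 'insert into' loads a permanent table that already exists; fully logged,
-- but the schema stays put from day to day
INSERT INTO dbo.SalesSnapshot (SaleID, Amount)
SELECT SaleID, Amount
FROM dbo.SalesStage;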

One argument I hear a lot when preaching this bit of DB wisdom is that you want a solid way of doing things and you want to standardize.  And frankly, that's just ridiculous.  That argument doesn't hold up to even the most basic scrutiny.  Ok, maybe ridiculous isn't the word… let's go with asinine instead.  So you're telling me that for this big snapshot load you should use the exact same process as you do for your smaller ones because standardization is the key?  Then why aren't all of your loads full snapshots?  Shouldn't you be standardizing on a single load process?  And what about your indexes?  Do you have the exact same fill factor on every single index in your shop?  If so, you're not paying attention to your index stats.  And are you using #table instead of @table absolutely everywhere?  Or are there cases where one is better than the other?
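There are, and that #table vs. @table question is a perfect example of a case-by-case call.  A quick sketch of the practical difference (made-up table shapes, and the usual version caveats apply):

-- #temp tables live in tempdb, get statistics, and can be indexed after
-- creation, so the optimizer makes informed choices on big row counts
CREATE TABLE #staging (id INT PRIMARY KEY, amt MONEY);

-- @table variables don't get statistics, so the optimizer assumes a tiny
-- row count; great for small sets, painful for large ones
DECLARE @lookup TABLE (id INT PRIMARY KEY, amt MONEY);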

So you see, with just a few examples I've shown that you don't do EVERYTHING the same way in your DBs.  So there's no reason to scream standardization with something as important as a data load.  Take each load on an individual basis and decide the best course of action for each one.  And yeah, as the data grows or changes in nature, you may end up revisiting your process and updating it to reflect the new nature of your objects.  And that's ok.  So many people think that having to rewrite a process is a huge sin and means you've failed somehow.  It really isn't, you know.  Not only are these types of things a learning process, but things also change in ways that require you to change tactics.

I really hope you take what I'm saying here and apply it.  And don't be one of those idiots who tells me he runs everything serially because he wrote the process on a single-CPU box and therefore needs to standardize on that platform instead of taking advantage of a multi-CPU machine.


The Performance Dilemma

OK, so yesterday I talked about how servers can take a long time to finally be brought to their knees by poor processes, and how you want to get to the point where your server is on a strict diet and you control every morsel that goes into it.

Today I want to talk about how that plays into a real performance strategy, because unfortunately you have more to consider than just raw performance.  I know, I know, it sounded yesterday like I was advocating that performance is the most important thing.  And it is… but at the same time it isn't.  So let me explain a little.

Performance is extremely important, and some might even say it's the most important thing on the server, but you also have to consider manageability and everything that comes with it.

See, quite often the code that leeches out the absolute best performance on the server isn't as easy to manage as other solutions.  Something like this isn't very concrete without an example, so I'll give you one.

Let's look at what I call moving schema.  The guys in my shop are sick to death of hearing that term, but it's an important one.  Moving schema is where you constantly drop and recreate objects that should be permanent.  A good example is doing 'select into' instead of 'insert into'.  Sure, the 'select into' performs better than the 'insert into', but it's also harder to manage from a couple of different perspectives.  For starters, during your load you can't specify a filegroup for the object, so you're stuck with the default.  For performance or growth reasons you may find it better to put it on a different set of disks, but you can't, because 'select into' doesn't allow it.  So from a space management aspect, 'select into' traps you in a corner.  Also, if you have specific permissions granted on the table, you have to keep track of them somewhere and reapply them every day when you recreate the table.  And you also have to remember to update your permission repository every time you change the perms on that table in the course of your other admin functions.  As well, most performance stats are gathered by objectID, and if you're recreating the object every day then today's objectID is invalid tomorrow.  So you either have to record the objectID every day so you can track performance and other issues (like fragmentation) from day to day, or you have to store the objectName in your repository as well, which leads to a larger, less efficient schema for tracking production issues.  It also makes the timing of other processes more critical, because with 'insert into' they'll just be blocked until the current load finishes, but with 'select into' they'll fail because the object doesn't even exist.
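Here's what the stable-schema side of that looks like — a sketch where the filegroup ([ARCHIVE_FG]), the role (ReportingRole), and the table names are all made up:

-- with a permanent table you pick the placement once; 'select into' would
-- always land it on the default filegroup
CREATE TABLE dbo.SalesSnapshot (SaleID INT, Amount MONEY) ON [ARCHIVE_FG];

-- perms get granted once and survive every load
GRANT SELECT ON dbo.SalesSnapshot TO ReportingRole;

-- the nightly load just empties and refills it; objectID, perms, and
-- placement never change
TRUNCATE TABLE dbo.SalesSnapshot;
INSERT INTO dbo.SalesSnapshot (SaleID, Amount)
SELECT SaleID, Amount FROM dbo.SalesStage;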

So ok, I've shown you a few reasons why something as simple as moving schema can be a problem.  And like I said, it's become one of my diatribes mainly because of those issues, plus a couple of outlying ones.  The guys at my shop have been hearing this battle cry for about 3 yrs now and I'm sure they're quite tired of it.  But when it comes right down to it, it's good DB practice.

There are some instances where the moving schema rule can be broken though.  Let's look at a really large data load, for instance.  If for some reason you can't do any kind of incremental load and you have to do a full snapshot every night (those instances are becoming fewer and fewer with Katmai) on a HUGE table (several hundred million rows), then you may be better off doing a 'select into', because you don't incur the logging and it can decrease your load time significantly.  It can also help control how big your log grows.  So this is a decision you have to make at the time, right?

However, I will say that quite often, if you take the size of the log out of the picture, the 'select into' doesn't out-perform the 'insert into' by all that much.  I recently tested the 2 methods against 30 mill rows and the 'insert into' only took about 40 secs longer.  That's really not enough to worry about in the course of a full ETL load.  And for all the extra benefits you get from having a stable object, that 40 secs is more than worth it.  So this would be a case where choosing the absolute best-performing process wouldn't be the best way to go.  In the case above where I talked about loading several hundred million rows, the 'insert into' may increase the load time by as much as 30 mins, and that's hard to ignore.  So depending on your requirements, you may decide that dropping the object every day is worth it.  But in the instance where the difference is only a handful of minutes, you'd typically choose to keep your schema intact and gain the other benefits.

So OK, you've got a process and you've decided to drop the schema every day to do your load.  And your process performs as well as it possibly could because of the reduced logging.  The problem is that people tend to think of performance only in terms of the single piece they're writing.  Performance includes more than just the load.  Performance also includes up-time, number of errors, manageability, etc.  The load is just one part of the process for any given object.  Examples of the different parts of an object's process would be the load process, the reporting process, the troubleshooting process, the triage process, the security process, the space management process, etc.  I'm sure there are others, but you get the point.  All of these are part of managing that object, and if they don't all play well together, then the process for that object doesn't work well.  So be careful about throwing all the other processes away, and making them extremely more complicated, just to appease a single piece.

I've been in several shops and it's always different everywhere.  Some shops are really big into management and others are really big into performance.  I've been in shops where managing the objects was just a pain, because the systems really were so busy that every last CPU cycle was counted and every last I/O had to be justified.  So they were willing to put up with the management headache that kind of performance costs, because they needed it to stay alive.  And again, you have to decide what's best for your place and your processes.  But whatever you decide, don't rest on that decision.  Keep making it again and again.  I'll talk about that tomorrow.

I guess that’s all I’ve got to say for now.


Diabetic Servers

One question I get a lot is about performance and how systems can run well for years and then suddenly just stop performing.  That's an understandable question, and one that's both easy and complicated to answer.

The short answer is that there’s a huge margin for error in these types of things.  However, the margin is only huge if you’re not doing anything about it.  Let me explain.

It's like being on a diet.  When you're watching what you eat, every little bit matters.  That extra helping of fries, that bowl of ice cream, and that soda are all death to a strict diet.  Even little things can be harmful when you're on a diet, and the stricter the diet, the more the little things matter.  That's why professional athletes of all kinds watch their intake like hawks.  In their case that extra ounce of potatoes or that extra ounce of meat can really make a difference.  And that's not even to mention diabetics and other people on strict medical diets.  Think about someone with severely high blood pressure.  Their diet is extremely important and the slightest wrong food can have serious blowback on their system.

Now look at someone who's already grossly overweight.  This guy eats whatever he likes, up to thousands of extra calories a day.  He eats only fried and fatty foods and eats as much of them as he likes.  So that extra helping of ice cream or those extra few fries really don't matter much on top of everything else.  That's not to say there's no cumulative effect, just that day to day it doesn't matter much.  Eventually though, it will take its toll as he gets heavier and heavier and starts to feel the health effects.  So while those extra fries do eventually catch up with him, they don't cause any real immediate effect on top of all the other stuff he's eating.

Well, that's much the way it is with servers too.  If you have a bunch of stuff that runs kind of poorly, or just not as well as it could, it's not really that important on a daily basis, because the server itself runs slow, and what's one more mediocre process going to hurt?  So a server can run like that for quite a while and nobody will ever really notice the difference.  Part of the problem is that so few people bother to investigate better ways of doing things, so they get used to their DB performing slowly.  It's not necessarily their fault; these things can sneak up on you.  Even a fairly good DBA can have wrong decisions go undiagnosed for a long time, and the poor performance sneaks up on him until the next thing he knows his system is just dragging.  And it's hard to go back and find the one thing that started it all.  I typically find that performance problems are systemic.  By that I mean that whatever mistake was made, was made throughout the whole system.  It's quite often not an isolated incident, unless someone new comes into a shop where things are already running smoothly.

So anyway, a server can put up with a good deal of abuse before it goes belly-up, but it will eventually happen.  What you want to get to is a point where you can treat your server like it's got diabetes.  You want it on a very strict code diet where you watch every single I/O, every single MB going into RAM, every CPU cycle, etc.  On servers like this, a single process that misbehaves has a noticeable effect on many other processes, and you'll know you did something wrong right away.  But if you've got a system that's slow to begin with, who's really going to notice if it's running a little slower, or if the CPU is hitting 81% instead of 76%?

This is why I'm always stressing performance considerations on servers, and why I end up answering this question so often.

This of course has to play hand in hand with tomorrow’s post on reasonable performance.


Get Away from the GUI

I had a comment on one of my last posts suggesting I talk about a few things, and this is the first one that struck my interest because it's a topic close to my heart.  And of course, I'm talking about getting away from the GUI and learning to do things at the command line.

And I know that SQL Server was built around the cmdline and a lot of you cling to it for all of your daily tasks, but you’ve just got to get past it and start doing things in the GUI.  No wait, reverse that.

So why do I want you to get off the GUI?  Well, for starters, you never really know a system until you code in it.  You can go your entire life not knowing what goes on under the covers if you're just clicking around, and then you're never learning anything.  So when problems come up you have no idea how to fix them, because you never bothered learning how anything works.

OK, I could sit here and write everything out in anecdotes, but since time's of the essence and my boys are going crazy this morning, I'll just put it in a list with comments.

1.  The GUI is limited – quite often the GUI won't do everything you need it to do.  This comes up a lot working with security, but also with things like CDC and data compression.  The GUI just doesn't have all the features the cmdline does.  Backup/restore is another good example, because you can't do page-level restores in the GUI.

2.  The GUI isn't scalable – you can't save GUI actions and reproduce them on several boxes.  If you code a solution, you can code in parameters that let you run the same code against different objects.  I think user accts are a good example.  What if you have to add a user to all 200 DBs on your server, or on 10 servers?  What are you gonna do?  Click through those perms on all 2000 DBs?  It'll take you days.  Meanwhile, at the cmdline you can run it against all the boxes at once and be done in a couple mins (see the sketch after this list).  Seriously, you can't beat that.  Everything you do in the GUI you have to reproduce by hand every time you need to perform that task.

3.  The GUI provides no knowledge – like I said already, if you always use the GUI you never learn much, because you never have any exposure to the inner workings.

4.  The GUI is single-task based – everything you do in the GUI is a single action.  You can't do multiple tasks at once, and you can't wrap them in transactions so they all pass or fail together.  What if you needed to add a user to SQL and to Windows?  You can do both in powershell at the same time, but in the GUI you have to physically click through and do them yourself.  And what if you wanted to make it so that if one of them failed, they both did?  You can't do that in the GUI.

5.  The GUI typically takes longer – even though the GUI is friendlier sometimes, it often takes longer to click through the screens than it does to type the cmds… at least for those of us who can type.

6.  The GUI is slower – many times the GUI is slower to accomplish a task.  I've had the GUI take quite a bit of time, or even freeze up on me, while trying to do something the cmdline finished in a couple secs.  So the GUI isn't nearly as efficient.

7.  GUIs don't multitask well – while Yukon brought us the ability to have GUI tasks running in the background while we work in SSMS, you can still only do so many things that way.  You can multitask much better at the cmdline, because you can offload work to other boxes more easily or just schedule scripts to kick off.  Again, you can't schedule a GUI operation.
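Here's the kind of thing I mean for #2 — a sketch of adding a user to every online user DB on a box in one shot (AppUser is a made-up login; the pattern is plain dynamic SQL in a cursor):

DECLARE @db sysname, @sql nvarchar(max);

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE state_desc = 'ONLINE' AND database_id > 4;   -- skip system DBs

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- the USE only changes context inside the dynamic batch
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N''AppUser'')
            CREATE USER AppUser FOR LOGIN AppUser;';
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @db;
END;

CLOSE dbs;
DEALLOCATE dbs;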

Look… GUIs are fine when you're a beginner.  They ease the burden of learning something new, and when a new technology comes out I quite often start with the GUI just to get my feet wet.  But then I switch to the cmdline pretty quickly and do it there the rest of the time.

And there's not much excuse for not learning the cmdline anymore, since Yukon brought us the ability to generate the code for our actions and Katmai has improved on it.  Now if you do something in the GUI that you're not sure how to do at the cmdline, you just script it from the GUI and learn something.  Then you can modify it for your purposes.  I still do that sometimes.

I was really lucky when I started learning SQL in that the guy teaching me was a huge cmdline buff and insisted I do everything that way.  I also emulated the MVPs, who all touted the cmdline, so I figured if it was good enough for them then I should be doing it too.  So in that respect I was lucky to learn that lesson early.

I hope some of you learn that lesson here and start trying to do your jobs at the cmdline.  It'll be tough at first, but in a yr you'll be really glad you took the pains.


Easier to Care

We've all had companies that didn't listen to us.  They hire you with the intention of having you make things better and improve their processes.  Then when you come on and start making suggestions, they don't want to implement any of them.  The reasons can be political, selfish, or just laziness, but the fact remains you're not doing what you were told you'd be doing.  And the company doesn't seem to mind.

So what do you do now?  The company doesn't seem to mind if things run the way they always have, so why should you?  It's definitely easier to care about the quality of your job when the company does.  But why would any company choose to do things poorly, especially when they've gone to such great lengths to hire someone to fix them?  The answer is, I just don't know.  I've seen it too many times at too many companies and it confounds me every time.  I think a lot of the time it's an issue of a boss having too much faith in a guy who's telling him you don't know what you're doing.  And even when you bring him solid numbers he still doesn't see the light.  That's the one that gets me: when I come to them with solid benchmark results and they still refuse to change the process, because there's a guy they've worked with longer who's lying to them about how it'll really behave in prod.

OK, so now what to do… well, you've really only got a couple of choices.  You can quit, or you can make your peace with it.  If you're just not getting anything you need out of the gig, then maybe it's time to move on.  But if you're getting something else out of it, like a good schedule or working from home, then it might be worth it to stick around.

If you do stick around, then you need to make your peace with it.  And that's the hard part.  Basically, you have to stop caring about the systems and how they run.  Consider yourself more of an internal consultant: they ask you questions and you advise them.  If they don't take your advice, it's not your problem.  It's really as simple as that.  Of course, if you're on the hook for support and this is gonna cause a bunch of extra work for you, then that's another story.  But you've gotta weigh that against everything else and make a decision.  If they're really not gonna listen to anything you say, then they really don't understand DBs, now do they?  Part of this is education, right?  You have to educate them about how to work with data.  Remember, this data explosion we're in the middle of is still relatively new, and many companies don't have any idea how to work with it yet.  They have to be taught.  Now if only you can find a way to get them to listen.

Just remember… numbers don't lie.  If you keep showing them numbers, eventually they may listen.  Of course, for some of these things you could always just do it yourself and say sorry later.  I'd never personally do anything like that (cough, cough), but I know some guys who have.
