Tag Archives: blunders

My morning so far

Ok, aside from being kinda sick still, my morning has been filled with interesting issues.  Ahhh– the life of a prod DBA. 

It started today with space issues–again.  We’re just running out of space everywhere and it’s hard to even keep some DBs running because of the problems.  So I’m having to shrink some DBs just to keep them active. 
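For the record, I'm talking about targeted file shrinks, not auto-shrink.  Something like this, with made-up DB and file names:

```sql
-- Shrink a single data file down to a target size (in MB).
-- The DB name and logical file name here are hypothetical.
USE SomeDB;
DBCC SHRINKFILE (SomeDB_Data, 10240);  -- target: roughly 10GB
```

It's a stopgap, not a habit; shrinking fragments your indexes, but when the drive's full, the drive's full.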

Now this one just gets me a little.  Had a vendor come up to me to ask for help.  He was on one of the prod servers and detached 3 of the DBs and couldn't get them back.  Turns out he had moved the files and renamed them and expected SQL to know where they were.  He said he had problems initially and that's why he did it, but he got stuck when he couldn't attach them again to point SQL to the new files.  So I got that worked out for him with relatively little effort.

Now this next one is just interesting.  I just switched our backup routine on our big system to backup to 20 files.  So the dev downstairs had a routine to restore it to a different server (I didn’t know that) and his routine was bombing.  He had re-written it to use the 20 files, but it was still failing.  Now, I’ll say that doing it the way he did doesn’t make him dumb.  In fact, I could very easily see anyone making a similar mistake because to someone who doesn’t know backup intimately, it seems like the kind of thing you should be able to do.  What he did was he was building a dynamic string to hold the file name values.  So in the string he was saying something like this: 

  

SET @BackupDatabaseFileNamePath = 'DISK = N' + '''' + '\\' + @ProdIP + '\' + LEFT(@BackupDatabaseFileNamePath,1) + '$' +

RIGHT(@BackupDatabaseFileNamePath,(LEN(@BackupDatabaseFileNamePath)-2)) + ''''

And so to that end, he was ending up with a string that looked like this: 

DISK = N'\\127.0.0.1\K$\SQLBackups\ServerName\DBName\01of20FullPRD20101102010001.BAK',

And he did that 20 times, once for each file.  So now his actual restore command looked like this: 

RESTORE DATABASE PRD

FROM @BackupDatabaseFileNamePathInc

And that looks like it should work, because when you print it you wind up with a perfect restore cmd.  The problem is that the restore cmd doesn't work like that.  It takes params, or flags if you will, right?  And one of those flags is 'DISK ='.  That means that the flag itself is 'DISK =', not a string that contains that same text.  It's a subtle difference to us, but not to the restore cmd.  So if you want to build a string like that for either a backup or a restore, you have to build a string that contains the entire cmd, not just a single part that includes the params.

Here’s an example of something you can do though: 

DECLARE @file VARCHAR(100)

SET @file = 'c:\SSISPkgMgmt.bak'

RESTORE DATABASE SSISPkgMgmt

FROM @file

WITH REPLACE

And what he was trying to do was this: 

DECLARE @file VARCHAR(100)
SET @file = 'DISK = ''c:\SSISPkgMgmt.bak'''

RESTORE DATABASE SSISPkgMgmt
FROM @file
WITH REPLACE

 
If you run something like this, you'll see that SQL treats the whole string as the name of a backup device, because that's the position a device name takes in the restore syntax.
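So the way to actually make his approach work is to build the ENTIRE command as dynamic SQL and execute that.  Here's a minimal sketch of the idea, with made-up paths and only 2 of the 20 files shown:

```sql
-- Build the whole restore command as one string, then execute it.
-- The DB name and file paths are hypothetical.
DECLARE @sql NVARCHAR(MAX);

SET @sql = N'RESTORE DATABASE PRD FROM
    DISK = N''\\127.0.0.1\K$\SQLBackups\01of20FullPRD.BAK'',
    DISK = N''\\127.0.0.1\K$\SQLBackups\02of20FullPRD.BAK''
WITH REPLACE;';

EXEC sp_executesql @sql;
```

This way 'DISK =' is part of the command text SQL parses, not data sitting inside a variable.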

So anyway, big fun again today too.

Even MVPs make mistakes

We’re in the middle of our last mock go-live before this weekend’s prod change-over.  We’re using an SRDF process to move the DB files from the current prod to the new prod.  This is our big SQL R2 cluster and what usually happens is that they present the drives to the cluster and then I run my script to bring the DBs back online.

Well what happened this time is that the drives weren't added as SQL dependencies, and therefore SQL couldn't see them in order to bring the DBs back online.  Well, I didn't think to check that.  Instead what I did was I just deleted the DBs, thinking that starting from scratch would be the best way to go.  What ended up happening, though, is that SQL deleted the newly presented data files.  So the moral of this story is that even though SQL can't see the drives well enough to attach the files, it can apparently see them well enough to delete them behind your back.  Thanks clustering team!

And so this isn’t entirely just a complaining session, here’s the query you can use to see which drives your clustered instance of SQL can see.

SELECT * FROM sys.dm_io_cluster_shared_drives

Now if you've got drives in the cluster that SQL can't see, all you have to do is add them as a dependency to the SQL service and you'll be fine.  In Windows 2008 you can do that while the service is online, but in 2003 and below you have to take SQL offline to add them.

Oh, and just for completion, you can tell which nodes are in your cluster by using this:

SELECT * FROM sys.dm_os_cluster_nodes

Here's a quick update:  The errors I caused and fixed this morning had some residual problems.  Because once you make a mistake and try to fix it, you've already got mistakes on your mind and you're not thinking clearly.  This is why I'm always advocating having restore scripts ready to go, so you don't have to think in a crisis.  You always want things to be as easy as possible.  And this is also why I'm always saying you should have standardized troubleshooting scripts and everyone should know how to use them.  You don't want individual DBAs inventing new methods on the fly.  You should all know what scripts are being run to troubleshoot, if at all possible.

What does a bad query look like?

In my SQL Marklar blog today I discussed troubleshooting DB processes.  And I’m not going to re-hash all of it here but I did want to tell you about a use case that describes perfectly what I was talking about.

Not so long ago I got a call from one team and they told me they had some server issues.  Everything was moving slow, they said.  Ok, so I got on and took a look and nothing was really jumping out at me.  Then I put a profiler trace on it to see if anything surfaced.  And of course I knew nothing about the app or the processes, so I really didn't know what I was looking for, but you've gotta start somewhere, huh?

So there I am in profiler and I’m just looking for long-running queries.  The problem is there were lots of queries I would consider long-running.  For some reason I focused in on a single SP that was taking like 5mins.  I pulled up the text of the SP and started looking through it.  It all seemed fairly standard.  I mean, it was long and everything wasn’t perfect, but there was nothing out of the ordinary. 

I contacted the app guy again and asked about it.  Does this SP typically take this long to run?  No, he says (and those of you who have seen My Cousin Vinny know where this is going).  So I thought eureka, I actually found something that may fix the issue.  So I got a couple valid params from him and ran the SP with them.  I also made sure to turn on execution plans and statistics io.  The query plan had some dings in it that experience has told me could easily have caused this kind of spike in resource usage.  The problem is that there was no fragmentation, and stats were up to date.  And in talking with the app guy he told me that they just archived a bunch of the data so it was down to like 200mill rows now.  So why would this thing be taking so long to return?  Moving on.
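For anyone curious, the checks themselves are nothing fancy.  Something along these lines (the table name is made up, not theirs):

```sql
-- Check fragmentation on a table's indexes.  Table name is hypothetical.
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Clients'),
    NULL, NULL, 'LIMITED');

-- Check when stats were last updated on the same table.
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Clients');

-- And the session settings I had on while running the SP:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
```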

I found a copy of his QA system that had been copied over from prod the previous week and he assured me that they had changed nothing.  I could see the extra rows in the tables (copied before the archival), and the indexes were the same as in prod so that wasn’t the issue.  They had the same fill factor, everything.  In fact, everything I checked was identical except for the amount of data.  So why would having less data cause such a huge performance issue?  Moving on.

I decided that running this thing again and again on prod was probably a bad idea.  I was just adding to the issue.  So I started doing the rest of my work on his QA box, where I was the only spid.  And the hardware was similar as well (I love it when it works out that way).  So I ran the SP on this box and 5mins passed.  Then 10mins.  Then 15mins.  Then 20mins.  And sometime soon after that, the query returned.  I had collected all my stats along the way so I was golden.  It was getting the same execution plan as the prod version.  The results weren't what I expected at all.  Why was the prod version performing well in comparison?  So I called the app guy again and explained the situation.  Here's more or less how the conversation went:

Me:  You know, I just don’t know what’s going on here.  I’ve been looking at this for a long time now and I’m getting further into a hole.  The prod version of this SP takes 5mins, and that’s even after the archival.  But when I run it on QA with all the data there it takes even longer.  If the prod query is acting up then I would expect the QA query to be a shorter time even with the extra data.

Guy:  Yeah that sounds about right to me.

Me:  What sounds right to you?  (I just got a bad feeling that something horrible had gone wrong)  (You know how you can instantly drop all the pieces into place and remember key words that make everything all of a sudden fit together?  Well, I just got that, but I wanted to hear him say it.)

Guy:  This SP usually takes about that much time, but since the archival it went down to 5mins.  We’ve been very pleased.

Me:  So you mean to tell me that when I came to you with this you didn’t find it necessary to tell me that the 5mins was an improvement?

Guy:  Well, I don’t know anything about DBs so I figured you could see that kinda thing already.

Me:  I see.  Well that clears up that mystery.  Now I’ve gotta go back and start over with this whole process.

Guy:  Well I can tell you the one that’s probably causing the issue.

Me:  Oh yeah?  How’s that?

Guy:  Because the slowness is only in our billing section and that’s controlled by just a few queries.  I can give you the names of the SPs and you can look at those.  There are only like 5 of them and since we’re having a hard time pulling up a list of clients it’s likely going to be the one that controls the search on that.

Me:  I see.  So you were just never going to tell me that?  I’ve been messing with this for 2hrs and you could have given me all this info before and saved me tons of time.

Guy:  Well, again, I don’t know anything about DBs and I figured you could see all that.

Me:  You thought I could see the web app from the DB?

Guy:  You can’t?

Me:  Kill me.

So ok, it turned out to be one of the 5 he gave me.  It had a bad query plan.  I wasn’t able to determine that all on my own, btw.  I had to recompile each one of them until I found the bad one.  And that’s because I didn’t have a perf baseline like I discussed on Marklar.
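The recompiles themselves are one line each (the proc name here is made up):

```sql
-- Flags the proc so it builds a fresh query plan the next time it runs.
-- The proc name is hypothetical.
EXEC sp_recompile N'dbo.ClientSearch';
```

Do that for each of the 5, rerun, and watch which one suddenly behaves.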

So there are a couple lessons to learn here, but I think you can glean them for yourself.  The whole point of this, though, is that making assumptions about processes is bad, and no matter what looks like a long-running query to you, it may in fact be performing better than usual.

The mini heart attack

Tell me if this scenario isn't all too familiar.  You get a call in the middle of the night saying that something's wrong with a SQL box.  It won't respond at all.  Now, you're the first one on the scene, because we all know that if there's a SQL box anywhere within 100 miles of the app, they'll call the DBA first because it has to be the DB.  So anyway, you have plenty of time to think while you're on the way to the scene.

That first step is the hardest.  You throw the covers off of you and expose your warm body to the cold night air. 

You swing your feet on the floor and try to find something that’s not gonna be too hot or too cold because you have no idea how long you’ll be up.  At this point you’re still grumbling about how wordy the helpdesk guy was when he called.  Why can’t these guys ever realize that we’re asleep and all we need is essential info?

Now you’re on your way down the hall and your thoughts turn to the problem at hand.  What could it be?  Did someone turn off the service?  Did CHECKDB take too long and is blocking everyone? 

You just hit your desk and start logging in.  While you're waiting for your desktop to come up it hits you… what's the state of the backups?  Am I sure I can recover this box should something happen?  What if this server's down for good?  Oh CRAP, is this the box where I got the alerts about the failed log backups yesterday and just ignored them?  I'll be lucky if the log just filled up and I can do something about it, but if the box is down and I lost all that data because I didn't feel like messing with it, I'm in trouble.

So you login and take a look at the server.  You’re able to TS in without any trouble and you breathe a small sigh of relief.  Well, at least the box itself isn’t down.  I can fix almost anything else that may be wrong.  You instantly look at your backup jobs to see if something has failed.  You see nothing out of the ordinary.

Next you look to see if anything’s blocking.  Nope, you see nothing of the kind.
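That check is a 10-second script if you've got it ready to go.  One standard way on 2005 and up:

```sql
-- Anything with a non-zero blocking_session_id is being blocked.
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```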

You can query the DB and it's online, so now your heart attack is completely gone and you turn your thoughts to what could actually be wrong.  So you call the person who submitted the ticket to begin with.  She reports that they couldn't get into the DB for a few mins, but that all seems fine now.  You tie up some loose ends on the phone and end the call.

You then close the lid on your box and as you walk back to bed you make a promise to yourself that you’ll never ignore another log backup alert ever again. 

Don’t let this happen to you.  Laziness is no excuse for not having a backup and answering the alert is much easier than answering your boss’s call about why you lost half a day of data.  Don’t you have alerts setup?  Then why did this happen?  That’s a tough question to answer. 

The point of all this is that things can go wrong on a moment's notice.  Restoring your DB is like a car wreck.  It can happen at any time and you won't have any notice.  So if you're not sure… and I mean SURE about the state of your backups, then get that way.  Do whatever it takes to make sure you're covered and you know it.  And that includes doing the actual restore.  Do you know exactly what it'll take to restore your server?  The last thing you want is to have to figure out what has to be done while you're under the gun.  So unless you know exactly what it'll take to restore something, you don't have a solid plan.  So avoid the bread lines and these mini heart attacks and do your due diligence.  The life you save may be your own.

The stupid have fallen

This is a followup to my previous post here.

Wow, what a day.  Ok, so I just got off the phone with the tech lead for that company.  It was enlightening to say the least.  It was really a longer call than I expected or that I even wanted, but my blood pressure’s gone down a bit now.  I obviously can’t recount the entire conversation for you, but here are the highlights:

V = Vendor

M = Me

V: Hi Sean, this is Josh(not his real name) with X and I was told you wanted to talk to me.

M: No, the last thing in the world I wanna do is talk to you, but I feel in order to get our issue resolved I have no choice.

V: Yeah, Cornhole(not his real name) told me you got upset with him when he was just trying to explain to you what your issue was and then you insisted on speaking with me.

M: This is just incredible.  No I didn’t get upset with him for explaining the issue to me.  I got upset with him when I told him his explanation was flatout wrong and then he called me an idiot.  He didn’t even hint at it.  He called me an idiot outright.

V: Really?  Cause that’s not the story I got at all.  I heard that he was calmly explaining that your backups were the issue and you just exploded on him.

M: No, that's NOT what happened at all.  That little turd actually called me stupid for not buying his asinine explanation for our issue, and I only exploded after he did that a couple times.

V:  Well what don't you believe about it then?  Because we've seen it many times where backups will cause processes to block.

M: No you haven't.  Listen, I realize that everyone can't spend their days studying SQL, but one simple fact is that backups don't block processes.  Period.  I've been doing nothing but DBs in SQL Server for over 15yrs against TBs of data, and nowhere have I ever seen a backup block a process.  The problem is that you've got like 3 tables with TABLOCK in that sp, and that's what's causing your blocking.  Don't you think it's funny that it only clears up AFTER you kill that process?

V: Where did you get the code?  Those sps are encrypted.

M: Encryption only stops amateurs.  And while we’re at it, what’s this script I hear about that you’ve got to unconfuse SQL to give you the real spid for the process?

V: Where did you hear that backups don’t block processes?  It’s well-known here that it’s one of our biggest problems.

M: No, your biggest problem is your TABLOCKS.  And I heard it from the truth. 

V: What is a tablock?

M: A table-level lock.  You’re locking the entire table just to read data.

V: I don’t know about that, I’ve never seen the code because it’s encrypted.

M: Well I do and that’s your problem.  Now what about this script I kept hearing about from Cornhole?

M: Did you get my email?

V: Yeah.

M: And you can see the place in the code where you’re locking the whole table?

V: Yeah.

M: So how is it that you lock the entire table and you claim that it’s the backup blocking everything?  Haven’t you noticed that the backup only decides to block stuff when this process is running?

V: That does sound a little suspicious when you put it that way.  What can we do?

M: Let me take those TABLOCKs out of there so I can query these tables without locking everything up.

V: I just saw your email signature and I’ve never heard of an MVP before, but I just looked it up.  Are you really one of those guys?

M: Yeah.

V: Wow, so I guess you really know what you’re doing then.

M: I’ve hung a few DBs in my day, sure.

V: Do you mind if I write you again if I have any questions?

M: I suppose that depends on whether you’ll let me change that stupid sp or not.

V: Yeah, go ahead.  I’ll have to have a talk with our devs tomorrow and try to convince them to put this in our next build.  Honestly, I’m only a tech lead here because I’ve been here a few months longer than anyone else on the support team.  We get all this stuff from the devs and they won’t tell us anything.

M: So what about this script I keep hearing about.  The one that tells you how to unconfuse SQL and give you the right spid for a process?

V: That’s a script the devs have and they won’t show it to me either.  I just tell my guys something else so they won’t know I haven’t seen it.

M: Wow, that sucks man.  You do know though that a script like that doesn’t exist, right?  It’s completely bullshit.

V: Yeah, I hate this place.  I've been suspecting for a while now that they were lying to me, but what can I do?  I need this job.  I'm just lucky to be working.  I have no idea what I'm doing here.

M: I don’t know what to say to that.  You’re right, you are lucky to be working these days.  But pick up a couple books on your own and learn SQL.  Don’t rely on them.  There are websites, blogs, etc.  I myself run a site.

So in short, I really started to feel for this guy.  He ended the call apologizing 100x for the grief they’ve caused me and he’s going to go out right away and start watching my vids and trying to learn more.  He asked if he could write me with questions, and I said, of course dude, anytime.  That’s what I do.

And yeah, he sounded about 22 as well.  Turns out I was right.  I asked, and he was 21 and this was only his 2nd job; they brought him in from nothing and "trained" him.  I'd say he's got some work to do.  But I personally consider this exchange a success.
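And if you've never seen the hint in question, here's a quick sketch of what it does (the table and columns are made up, not their actual schema):

```sql
-- A TABLOCK hint takes a table-level lock even for a plain read,
-- so everything else queues up behind it.  Names are hypothetical.
SELECT ClientID, ClientName
FROM dbo.Clients WITH (TABLOCK)
WHERE ClientName LIKE 'Smi%';

-- Without the hint, the same read takes row/page share locks and
-- plays much nicer with concurrent activity.
```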

Why can’t voodoo be real?

I had a very interesting talk with a vendor this morning.  Ok, it wasn’t really interesting as much as maddening.  I swear, it’s guys like this that are going to be the death of me.  I know I shouldn’t let them get to me but sometimes they do.  You know how it goes, you try to explain something and it just escalates until you’re upset.  Anyway…

OK, so they were having blocking issues in the DB.  The vendor was trying to convince the app team to turn off the SQL backups because they were interfering with their processes.  That’s when they called me.  Frankly, I can’t believe they called because the vendor was very adamant and this is the kind of thing our app groups do on their own without consulting us at all, so I’m more shocked than anything.  But I knew this was going to be a bad call the second my app guy told me what was going on.

Here’s the conversation as it took place just a couple hrs ago.  It took me this long to regain my strength.

V = vendor

M = me

M: Ok, I’m here now.  So explain to me what’s going on.

V: Well, we’re having severe blocking issues in the DB and after looking at it we noticed it’s the SQL backups causing it.  So what we need to do is stop the SQL backups.

M: Backups don’t hold locks in the DB so there’s no way for it to be causing blocking.

V: Of course backups hold locks.  Everything holds locks.

M: No, they don’t hold locks.  There’s never been a case where a backup has blocked a process.

V: We see it all the time.  And you’ll just have to trust me because I do this for a living.  I’m very familiar with this app and this is how we always fix this issue.

M: You always fix your poor coding issues by having people not backup their data?

V: You can backup your data, but you have to schedule downtime every week to do it… to keep it from blocking.

M: I really don’t want to finish this conversation, but I’m developing this morbid curiosity about where this is leading.

V: What does that mean?

M: Well, you clearly know nothing about SQL and you’re doing your customers such a huge disservice by making them turn off their backups when all you have to do is fix your query.

V: What do you mean I know nothing about SQL, you’re the one who thinks backups don’t cause blocks when I see that all the time.

M: No, what you see all the time is your process causing blocks and people running backups.  Because I’m on your server now and I can see the blocking you’re referring to and it has nothing to do with the log backup, it has to do with your process on spid 201.

V: I thought you were a DBA.  When there’s severe blocking in a system SQL Server sometimes reports the wrong spid.  Our tech lead assures us that this is coming from the backup.

M: Well, if you’re just going to insult me we can end this call right now because I really don’t have time for this.  I can’t teach every moron who writes an app for SQL everything they need to know to support it.

V: Now who’s being insulting?

M: Oh, that would still be you.  Because you know nothing about SQL and you hop on the phone and start calling me an idiot because you don't know how to troubleshoot a simple blocked process.  Since we've been on the phone, I've looked at the sp that's causing this and the problem is simple; you've got a hardcoded TABLOCK in 2 of your queries.  That's what's causing the blocking.  Have you ever bothered looking at this process to see what it's doing?

V:  No, because it’s the backup causing it.  There’s nothing wrong with the code.  We run this at several customer sites and nobody ever complains.

M: Ok, let me try something different here… if SQL’s reporting a different spid for the blocking process, how does your tech lead know that this incorrect spid points back to the backup?  Why couldn’t it be pointing to one of the many other processes running right now?

V: He’s got a script that can tie them together.

M: I would love to see that script.  Maybe I could learn something, huh?  But if SQL itself gets confused about the spid and doesn’t report it right in sysprocesses, then how can he tie it to the right process in his script?  Wouldn’t SQL tie it to the right spid and present it to us the right way to begin with?

V: He won’t show anyone the script because he says it’s too advanced and we don’t need to know how to do those types of things. 

M: Wow, that’s convenient isn’t it?  Let me try something different here.  Because now this is like a train wreck I just can’t turn away from.  When you kill the backups at other client sites, does the process clear up right away?

V: No, unfortunately, the database can't recover from something that's been blocked by a backup, so we have to kill our process too, and then things work fine.

M: My, you’ve just built your own little world in there haven’t you?  You’ve got excuses for everything.

V:  They’re not excuses.  It’s how databases work.  Where did they find you anyway?  Have you ever been to any SQL Server classes? Maybe they should send you to some.

M: No, I've never been to any SQL classes.  I've never needed them.  And if you call me an idiot again you're gonna find out what else I can do.  In fact, I'm a hair away from calling the CEO of your company and letting him know what kind of stupid, snide little ass he's got working for him.  So you'd better watch your tone with me, boy, and remember I'M the customer.  So you don't tell me what to do, and you don't get on the phone with me and call me an idiot.  And here's something else for you.  Before I even got on this call I ripped your access from the box, because I don't want you touching anything on our server.  Nobody's turning off our backups, and nobody's ever touching our server.  And I'm actually done with you.  I'm not going to troubleshoot this with you one more second.  Here's what I want.  I want you to escalate this to your team lead and I'll fight this out with him.  I want a call with him at the next available opportunity.  You are out of this issue.

V: Look, I’m sorry if you thought I was being

M: Shut up.  I'm done with you.  The next call I get had better be from your team lead.  Good-bye.

I’m telling you guys, if voodoo were real I’d be sticking pins in this guy’s ass for the next month.  And the sad thing is I doubt it’s his fault.  He’s got some jackass yanking his chain and this is probably just some kid who they’ve taught to be this way.  He sounded like he was probably in his early 20’s.  I still haven’t heard from his tech lead, btw.

Dealing with superstition

To be a DBA for any real length of time takes a certain type of personality.  For starters it helps if you’re fairly strong-willed and confident.  In order to deal with a lot of the customers you find yourself coming in contact with it really helps if you can stand your ground.

This is the exact type of guy that we've had as the past 3 DBAs before I got here.  The problem is that they were apparently pretty dumb, and they spread so much misinformation about SQL and about DBs in general that it's a constant battle for me to undo it all.  Like I said, these guys were dumb, not malicious, so I really don't think they meant to spread that much bad information… I think they just didn't know how to pick up a book.

So this situation causes a special problem for me because I’m trying really hard to do things right here but a lot of my customers and vendors just won’t let me.  “Well, the last guy told us this is how it works”.  Yeah, I bet he did, but that’s not right.  And I can tell them it’s not right 100x, but they’ve heard the same tired story for years now so they’re not likely to believe me even after I prove it to them.  And we all know that the guy who says it the loudest is typically believed.  And I’ve heard that our previous DBAs were all quite passionate that things be done a certain way and they really went to bat for what they wanted.  What’s funny is they went to bat for ridiculous things, and then let a lot of really big things slide… but that’s another post isn’t it?

The only thing I can hope for at this point is to stick around longer than the other DBAs did and eventually my rhetoric will start to sink its way into their heads.  I’m thinking one way to do it is to just flatout lie.  Just tell them, yeah, that’s the way it used to be but on this new version they’ve fixed all that.  Something tells me that’ll work out much better than just telling them the last guys were idiots.  Because if I keep doing that, then I might as well be trying to talk the Vatican out of religion.

Are you kidding me?

This one has me going so much I had to blog for a 3rd time today.  I keep finding boxes that have failing backups and other processes.  These jobs have been stopped for weeks in some cases.  And why you ask?  Hah, that’s a good question.  The reason why they keep failing my most learned internet peeps is because the previous DBA was running them under his own personal acct.  He had scads of jobs and other processes running under his acct… as did the DBA before him and the DBA before him. 

Are you people kidding me with this crap?  Ok, for those of you who don’t know enough to be sure why I’m upset, you never run anything under your own acct… or anyone else’s acct for that matter.  Even if you don’t leave this gig, you’ll still be forced to change your password at some point and there’s no way you can keep up with every job, service, ssis pkg, script, website, credential, etc that you’ve got running under your acct.  And then you’ve gotta get everyone else involved to help you find it because you keep getting locked out every 5mins.

When IT was young you could excuse these things because we were still figuring things out then, but there’s just no excuse for this level of idiocy anymore.  Come on guys, straighten up.
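If you wanna audit your own shop for this, job owners at least are a quick query away (checking against 'sa' is just my convention; substitute whatever service acct you standardize on):

```sql
-- List jobs not owned by sa; personal accts will show up here.
SELECT j.name AS job_name, SUSER_SNAME(j.owner_sid) AS owner_name
FROM msdb.dbo.sysjobs AS j
WHERE SUSER_SNAME(j.owner_sid) <> 'sa'
ORDER BY owner_name, job_name;
```

That still leaves services, SSIS pkgs, websites, and credentials to chase down, but it's a start.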

Read my mind

Man, talk about your all time screw-ups!!

OK, here’s what happened.  I got a ticket the other day to truncate a table in a prod DB.  Simple enough, right?  So I wrote the guy and said, are you sure you want me to truncate this table?  He said, yeah, it’s taking up over half of our space.  I said ok, just give me a couple mins.  So I connected and truncated the table.

Well, then I get a call like 10mins later and the guy was frantic.  What happened to my data?  Well, you told me to truncate the table.  NOT ALL OF IT!!!  So this is a valuable lesson, isn't it?  Always be sure what you're asking for, and never assume.  I of course didn't assume anything.  We even get tickets to truncate entire DBs all the time, so it's not that big of a deal for us.  Anyway, just be aware that many users have a completely different database vocabulary than you do.
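And just so we're all speaking the same language, here's the difference in play (the table and column are made up):

```sql
-- TRUNCATE removes EVERY row; there's no WHERE clause, ever.
TRUNCATE TABLE dbo.SomeTable;

-- If the user only wanted SOME of the data gone, that's a DELETE.
-- The date cutoff here is hypothetical.
DELETE FROM dbo.SomeTable
WHERE CreatedDate < '20100101';
```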

Is this industry fun or what?

A Nice SQL Virus

I’m being forced off of my powershell kick here for a few mins to talk about viruses.

Why do companies refuse to allow scanning exceptions for DB files?  Do you really think anyone's written a SQL virus?  I really can't remember ever having seen an actual virus that attacks SQL files.  Actually, I take that back.  There is one virus that attacks DB files and drags performance so far down you can barely use the system.  In fact, in my performance realm I actually classify this virus as an official DOS attack against my DB.  It's called virus scan.  Virus scans are the only thing I've seen that specifically targets DB servers to bring down their performance.

So if any of you non-SQL admins out there are running AV on your servers w/o exception policies for your DB files, turn yourselves in because you’re officially launching a DOS attack against your own servers.

But I promise I’m not bitter.