All posts by Sean McCown

I am a Contributing Editor for InfoWorld Magazine, and a frequent contributor to SQLServerCentral.com as well as SSWUG.org. I live with my wife and 3 kids, and have practiced and taught Kenpo for 22yrs now.

MultiCast Mirroring

I was talking to a vendor the other day, and he said he was trying to build a 3rd-party solution that would let you mirror a single DB to several servers at once. I’ve officially coined the phrase MultiCast Mirroring. I think this is the next logical step in the mirroring process, and while it wouldn’t be practical to run every session with safety on, you could use it to keep several types of systems more or less in sync. Of course, I would expect at least one of the mirroring sessions to have safety on, but not all of them.

I know what you’re all thinking… if you need to do something like that, then why not just use replication? Well, that’s actually a fair question, and as we’ve all seen, repl and mirroring aren’t the same thing at all. They’re different approaches to similar problems, but they have major differences. I’m not going to go into all the tired arguments right now; most of us know what they are. Let me just say that repl adds a complication that doesn’t need to be there.

One interesting thing is that as I go over the architecture of what would make this possible, under the covers it would need to look a lot like repl. To avoid any reading conflicts, etc., with the primary log, you’d probably end up shipping the single log to something like a log distribution server, where any server that wants to attach itself as a mirror can do so. So now we’re looking at a publisher/subscriber type of method. It’s still easier to keep up with than repl, though.
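
Just for reference, here’s roughly what a single mirroring session looks like to set up today. This is only a sketch with made-up server and database names, and MultiCast Mirroring would mean repeating the partner setup once per target, which the engine doesn’t do natively:

-- A minimal single-session sketch (SQL Server 2005). All names here are made up.
-- On the mirror server: seed the database from a backup, left in NORECOVERY.
RESTORE DATABASE Sales FROM DISK = 'D:\Backups\Sales.bak' WITH NORECOVERY;

-- On both servers: a mirroring endpoint (the port is arbitrary).
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror, point at the principal; then on the principal, point at the mirror.
ALTER DATABASE Sales SET PARTNER = 'TCP://principal.mydomain.com:5022';
ALTER DATABASE Sales SET PARTNER = 'TCP://mirror.mydomain.com:5022';

-- Safety off (asynchronous) for the extra copies; leave it on full for the one that matters.
ALTER DATABASE Sales SET PARTNER SAFETY OFF;

Multiply that by N targets and you can see why the log would need to be read once and fanned out from a distribution point, rather than every mirror hammering the principal’s log directly.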

Anyway, this is all just my rambling, but it would still be interesting to see something like that.

It Pays to be Paranoid

You know, sometimes it just really pays to be overly paranoid, especially about backups. Here’s the deal…

I’ve got my home web server and database. I had an array controller start acting up on me about a month ago and I knew I was gonna have to replace it. So I started the process of making sure I had good backups of all my data in case the controller went out before I had a chance to fix it. I copied my DBs and websites to a different partition on the same box. I also backed them up to my NAS. I also copied them over to a different server. And just to be sure, in case the backup was corrupt (which is a real possibility with Windows backup), I copied the raw files to the NAS as well. OK, so I’ve got all of this paranoia going on here, but this is very important data to me and I just can’t afford to lose it.

So I got my array back up and everything running again. No problems. Then, this weekend, I lost a drive on that array. And of course, I’m running RAID 0. Why? Because disk is expensive, this is my home system, and sometimes it’s hard to justify the extra cost of making things redundant. Well, I’d been running backups to the NAS pretty much every day, so I knew I was protected. Unfortunately, I found that my NAS had lost a disk too… and you guessed it… RAID 0 as well. Ordinarily I could go to that other server I copied everything to during the previous outage, but I’m in the middle of rebuilding that server for my new Exchange server. So that data’s no longer there… great.

Now it looks like I’ve lost all my data. I can’t get the array back, and I’ve lost something like 12 years of data. It’s a complete disaster. Well, last night, after talking to one of those disk recovery outfits, I was about to pack up my array and send it in for the $3,000 data retrieval process when I looked at the other partition on my server and there it was… the data I had copied over from the previous meltdown! I couldn’t believe it. Fortunately, my data doesn’t change often enough to really worry about a couple of weeks going by between backups, so I haven’t lost anything at all.

Let me just say that when it comes to backups, it pays to be extremely paranoid. If your data’s that important, act like it. I thought that my NAS and other server would be good enough, but they clearly weren’t. So always plan for multiple failures if at all possible. You could have electrical problems that knock out more than one server, you could have a flood or other natural disaster, etc. I’m serious here. If the data you’re protecting is that important, treat it as such. In fact, treat it like a newborn baby. Protect it and never let anything happen to it.

I’ll definitely be revamping my backup strategy at home so nothing like this ever happens again. I guess the first part of that is to stop being so cheap and get off RAID 0. I just can’t afford that anymore. So learn from my near miss and take a good, hard look at your backups with a paranoid eye. Develop impossible scenarios and see if your strategy can overcome them.
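
On the SQL side, the paranoia doesn’t have to be fancy. Something along these lines covers the basics… two copies that don’t share a failure mode, and an actual check that the backup can be read back. The paths and database name here are made up, and the CHECKSUM options assume 2005:

-- Back up with a checksum so corruption gets flagged at backup time (names and paths made up).
BACKUP DATABASE HomeDB
    TO DISK = 'E:\Backups\HomeDB.bak'
    WITH INIT, CHECKSUM;

-- Keep a second copy somewhere that doesn't share a failure mode with the first.
BACKUP DATABASE HomeDB
    TO DISK = '\\NAS\Backups\HomeDB.bak'
    WITH INIT, CHECKSUM;

-- And actually verify that the file can be read back before you trust it.
RESTORE VERIFYONLY FROM DISK = '\\NAS\Backups\HomeDB.bak' WITH CHECKSUM;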

Good Luck!

Cut and Paste Code

This is just a little reminder to not be so lazy.  I know it’s quick and easy to cut and paste code, especially for reports.  It saves a lot of time and you can really burn through projects that way.  But you just have to consider what you’re doing.  Just because a section of code worked well before, that doesn’t mean that it’ll work well here.  Look before you leap.  If you want to cut and paste code, go ahead, but at least look at it to make sure it’s actually what you need. 

Recently, I ran across a group of SPs that were clearly from the same template.  They all pulled 150+ cols into #tables and used only a couple of them.  The theory behind it was that the original author needed a few cols, and the rest of the authors only needed a couple, but since those were included in the select * the original guy pulled in, they decided to just leave it at that.  It’s nothing but laziness, people.  Look at what you need and get it.  A DB is like a buffet: take all you want, but eat all you take.  If you need logic from a query, that’s fine, but you should only use that query to help you cut some of your work, not to replace it.
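
To put it in concrete terms, the cloned pattern looked roughly like this (the table and column names here are made up for the example):

-- What the template did: drag 150+ columns into tempdb...
SELECT *
INTO #everything
FROM dbo.OrderDetail;

-- ...and then use only a couple of them.
SELECT OrderID, TotalDue FROM #everything;
DROP TABLE #everything;

-- If the report only needs two columns, only take two.
SELECT OrderID, TotalDue
INTO #justEnough
FROM dbo.OrderDetail;

SELECT OrderID, TotalDue FROM #justEnough;
DROP TABLE #justEnough;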

So, do the work, think about what you need, and do what’s right.  And another thing… cut and paste code leads to some other really bad things.  Take the case of the code I just fixed.  Not only did the SP get cloned, it wasn’t good code to begin with.  So, instead of the other report writers figuring out how to write the queries themselves, and possibly stumbling on a better way, they just pushed the same bad code out there.  Don’t just assume that because code is published in the DB, it’s good.  Take the code if you need it, but only take what you can use, and definitely check it somehow to make sure it’s written well to begin with.  If you don’t know how to check it, there are plenty of resources.  Start with your own DBAs, and if you can’t get any love there, try the internet… books… training CDs… user groups… seminars… newsgroups, etc…

Database Spam

I love this. I logged into my Blogger account about 3 weeks ago to post a blog and they had a message up saying that they had a new version with all these new features. I upgraded because it sounded cool, and the first thing they did was mark my blog as spam and disable it.

I filled out the form to get it turned back on, and it just came back up today. It always pays to upgrade.

Don’t Blow it Off

From time to time I get saved only by the grace of good practice. I’ve got this server that has had fairly unreliable backups for a while now. It’s really hit or miss as to whether the whole thing will get backed up on any given day. Yesterday was one of those days when the backups failed. It was very tempting to just let it go and have it pick up the backups the next day, but I bit the bullet and did what it took to run the backups by hand instead. What can I tell you… last night, the NOC guy who runs the ETL process messed up and hosed 2 DBs on that server. At 2:30am I got a call that both of those DBs had to be restored. I was able to restore and get things back on track in just a few minutes.
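
For what it’s worth, the by-hand run and the 2:30am fix were nothing exotic. The database name and paths below are made up, but it went something like this:

-- The by-hand backup that saved the day (name and paths are made up).
BACKUP DATABASE Warehouse
    TO DISK = 'G:\SQLBackups\Warehouse_full.bak'
    WITH INIT;

BACKUP LOG Warehouse
    TO DISK = 'G:\SQLBackups\Warehouse_log.trn';

-- The 2:30am restore: full backup first, then the log to bring it current.
RESTORE DATABASE Warehouse
    FROM DISK = 'G:\SQLBackups\Warehouse_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG Warehouse
    FROM DISK = 'G:\SQLBackups\Warehouse_log.trn'
    WITH RECOVERY;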

Take a page out of my book and never blow off backups. You never know when you’ll be called on to restore. Anything can happen at any time and you need to be ready. I know sometimes it’s a beating to make sure the troublesome backups complete, but one day it’ll really save the day… and maybe your job.

Remember, a DBA is only as good as his last backup.

Ken’s New Book

I just got my copy of Ken Henderson’s new book, and despite everything I have to do, I’ve already started tearing into it. Understand, though, that he just edited it; he didn’t write it. This promises to be every bit as significant as his other books, though, and it’ll be the next book I review. I just pushed it up in the queue (since MSPress is taking so long to send the others I’ve ordered).

Anyway, once I get it posted, I’ll let you guys know.

Upgrade Madness

We recently ran into a very interesting anomaly here while upgrading to Yukon. We have the following query.

Select * from DWStsging.dbo.history

Of course, you all know the 3 part naming convention, so you know that DWStsging is the database name. However, what you may not know is that it’s misspelled. The real name is DWStaging, and the one in the query is mistyped.

So what makes this interesting? SQL2K runs it just fine, and has for months, while Yukon spits it back at us. It’s these little things that make upgrading interesting.

These are the kinds of things I really don’t mind, though. Sure, it’s nice that 2K does what you mean and not necessarily exactly what you say, but I’d rather my DB not make assumptions like that. So if anything, Yukon is forcing us to clean up our code in places where we didn’t even know we had problems.
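
If you want to hunt down other hard-coded references like this before you upgrade, even something as crude as a text search of syscomments on the 2K side will turn up most of them. This is just a rough sketch; syscomments chops long definitions into 4000-character chunks, so a name could theoretically straddle a boundary and slip through:

-- Find procs, views, functions, and triggers that mention the misspelled name.
SELECT DISTINCT OBJECT_NAME(c.id) AS ObjectName
FROM syscomments c
WHERE c.text LIKE '%DWStsging%';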