All posts by Jen

Announcing Minion CheckDB release date!

MinionWare will release Minion CheckDB on February 1, 2017!

Minion CheckDB 1.0

Minion CheckDB is the third piece of our free backup and maintenance tool set, rounding out the lineup alongside Minion Backup and Minion Reindex. CheckDB will have the same native interface, the same configurability, and the same rich scheduling introduced in Minion Backup. And of course, it will have the same kind of visionary features that users have come to expect from MinionWare products, like:

  • Rich logging
  • Automated rotating schedules for objects
  • Automated remote CheckDB operations
  • Automated choice of whether databases get a DBCC CheckDB operation, or a series of DBCC CheckTable operations
  • And much more!
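
A quick illustration of that CheckDB-versus-CheckTable choice, since it’s the least familiar item on the list: a full DBCC CheckDB validates the entire database in one (potentially very long) operation, while a series of DBCC CheckTable runs spreads that cost out over time. This is just a sketch of the two underlying commands; the database and table names are ours, not Minion defaults.

    -- One pass over the entire database:
    DBCC CHECKDB ('BigDatabase') WITH NO_INFOMSGS;

    -- Or spread the cost out, e.g. a few tables per night on a rotating schedule:
    USE BigDatabase;
    DBCC CHECKTABLE ('dbo.Orders') WITH NO_INFOMSGS;
    DBCC CHECKTABLE ('dbo.OrderDetails') WITH NO_INFOMSGS;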

Minion Backup 1.3

What’s more, we will also release Minion Backup 1.3 on that date! New MB features include:

  • Our new dynamic naming functionality (called named parts)
  • Improved restore process
  • Additional advances, both in their own right and in support of Minion CheckDB

Subscribe to the MinionWare newsletter for news and updates about Minion Enterprise, backup and maintenance, and more.

Minion Backup for SQL Server

We’re having a Minion Backup webinar on Wednesday, June 1 at 12:00 PM. Register here!

Enterprises have increasingly complicated backup needs.  With different data centers, high availability nodes, development refreshes, and more thrown into the mix, a DBA could almost fill a full-time job just building a backup routine that does everything it needs to do.  But we’ve done that for you.

Minion Backup is a free community tool that has more than enough features to handle even your toughest scenario.

In this webinar we’ll show you how this FREE tool by MinionWare can meet your scalability and HA/DR needs with almost effortless management. We’ll show you how Minion Backup can:

  • Be configured easily for all your servers.
  • Be highly customized without any extra jobs.
  • Be configured to be not only HA-node aware, but also data center aware.
  • Be configured to copy backup files to dev or QA boxes for restore.
  • Back up all your certs with the most secure method available.
  • Dynamically tune your backups so you’re always using the proper number of resources for each DB.
  • Have multiple schedules and retention periods for each DB without having to create multiple jobs.
  • Be configured to redo backups that failed so you don’t have to get involved.
  • And more.

Come see why Minion Backup is almost literally taking the SQL community by storm, and why it’s the new diamond standard for backups in SQL Server.

Meeting registration.

25 things I learned writing commercial software

It’s our job to learn new things.  We’re constantly studying, practicing, refining, etc.  But I’m not sure that I’ve ever learned so much about the different ways people work as I have by writing commercial software.  Now, my free Minion modules don’t cost anything, but that doesn’t mean they’re not commercial software.  They’re released to the public and are becoming quite popular, so they’re commercial in the distribution sense.

And there are things that I’ve learned about SQL and DBAs in general while building these tools.  Here’s a list of some of the things I’ve learned while writing the Minion maintenance modules.  Not all of these were new to me.  Some of them I’ve known for years, but they were shoved in my face during this process.  Others I’ve kind of known, and still others never even crossed my mind because I’ve never had to deal with them before.  So come with me on the very rewarding, yet often quite frustrating, journey of commercial software design.

  1. The customer isn’t always right. That may work in the food service industry, but it definitely doesn’t fly in IT.  I’ve found that giving people what they want is a dicey thing because not only do a lot of them not really know what they want, but sometimes they don’t have enough experience to know that what they want isn’t what they need.
  2. Service is key. At MinionWare we pride ourselves on answering support tickets immediately.  We consider it poor service to even let a ticket sit for an hour, and in fact most of the time we answer the ticket within 10 minutes.  I think this is essential because people have to know that their issues are going to be addressed in a timely manner.
  3. You’re proud of your product. You’ve written something that you think everyone will just love.  So you package it up and send it out to the masses.  And as much as you love what your code does for the world, cut that love in half for the public.  Nobody will love your baby as much as you do; at least not in the beginning they won’t.  However, there’ll be some who really get what you do and who love it right away.  Others will take some convincing.  While yet others never get excited about anything.  It’s only DB maintenance, dude; how exciting can it be?
  4. People have all kinds of reasons for not adopting your solution. Sometimes it’s warranted, and sometimes it’s just laziness, or not wanting to change.  This is neither good nor bad, it just exists.  Get ready to hear ‘no thanks’ a lot more than you’re used to.
  5. There are so many different configurations and situations people run SQL in that it’s exceptionally difficult to write software to cover all the bases. Minion Backup was more difficult in this respect than Minion Reindex, but there was still some of that for MR.  But there are so many ways people want to add backups to their processes, and so many things they need to be able to do, that it’s really hard to get it right.  So the question is, have we gotten it right with MB?  Honestly, only time will tell, but I think we objectively did a really good job.  We’ve had some bugs but no major config flaws that I can see.  I think we’re set up well enough for the future of the product.
  6. It has to be as easy to configure as possible. Users don’t like to jump through hoops to make simple changes to software.
  7. No matter what you put in the product, you’ll have forgotten something that someone wants. I forgot to allow NUL backups in MB and a user requested it.
  8. User requests and bug reports are a good thing. It doesn’t necessarily make you a bad coder to have bugs.  You could just have a complicated app with many different complicated situations and you can’t code for everything out of the gate.  But feature requests and bug reports mean that people are using your software and like it well enough to want to see it improved.
  9. That BS you pulled at your last company where the code you wrote was “good enough” simply won’t fly here. Your name is on this code, and how embarrassing would it be for someone to comment on a poor portion of your code, only for you to have to say that you didn’t feel like doing it right?  Laziness is no excuse for poor coding or design.  Take the time to do it right, even if you have to recode portions several times.
  10. Don’t be afraid to try really outlandish things. IT gets mired in the mundane sometimes.  Turn your product on its ear.  If there’s something that you really want to be able to do, but it seems too hard, or even impossible, then that’s a place for you to shine.  Write sample code for even the most outlandish ideas.  You never know when it’s really not going to be as bad as it seemed.  It may not always work out, but at least you’re trying to tackle the issues people are faced with.  I had a few of these moments in MB.  There are problems we face every day with different backup situations that I wanted to solve.  And I didn’t want to be bound by what’s considered tradition to solve them.
  11. You can’t control who downloads your software. You may have a primarily American market in mind, but you’ll get downloads from all around the world.  Why is this important?  Well, that instantly throws you into different collations, time zone issues, etc.  I got caught in MB by foreign decimals.  I hadn’t counted on that, and when I started getting downloads from other countries, backups stopped running because PowerShell and SQL handle these decimals differently.  I didn’t know that before I started this.
  12. Test Test Test… then test again. Keep a list of all your edge cases and test every new version against every one of them.  The more you test the better your product.  And formalize it.  Don’t just run it a few times on your laptop and release it to the world.  If you support different versions of SQL then you have to test every feature not only on every one of those versions, but also on all versions of Windows they can be installed on.  And if you can, test it against every major service pack.  Microsoft added 3 columns to RESTORE HEADERONLY in a service pack and it broke MB.  It didn’t even cross my mind to test for individual service packs before I started this.
  13. You can’t test for everything. Sometimes there are some ridiculous things that keep your software from being successful, and sometimes they’re not anything you could’ve foreseen.  Again, MB has a perfect example.  As it turns out, when you’re loading the PowerShell SQL module, having SSAS installed on the server has no effect on backups.  However, if you have SSAS installed and the service isn’t started, then it shoots up a warning when you load the provider.  So we found that the warning was taking the place of the data we were expecting and backups were halted.  If you’d have asked me a year ago if having SSAS turned off would affect your backup routine, I would’ve said ‘Hell No’.  Leave it to me to write software that finds this kind of issue.
  14. Not every feature request carries the same weight. I don’t really believe in up-voting feature reqs.  I think if a feature is a good idea then it should go into the product no matter how many people requested it.  Maybe I’ll change my mind when there are 2 million people using my stuff and I’ve got feature reqs coming out my ears, but for now, I look at everything on its merits.  That doesn’t mean though that every request is equal.  I’ve had some pretty ridiculous feature reqs from people who clearly weren’t DBAs and really didn’t know the proper way to manage their backups.  These are the requests you don’t give much weight to.  However, this is your opportunity to teach, so help your product shine by showing them the proper way to do things, using your product to do it.
  15. Documentation is key. The more you can tell people about your product the more successful you’ll be.  There are people who just won’t read it, but there are others who will comb out every last nugget.  And if you have a particularly sensitive feature, or something that is a little more difficult to configure, then give your reasoning behind designing it the way you did.  Give the use cases for the feature.  This will help people know when to use it and when not to.  And it’ll help them know what’s going on behind the scenes.  The more they know the better everyone is.
  16. You can’t add every feature request.
  17. Use your own software. If you don’t use it, then who will?  And there’s no better way to flesh out bugs, and usability issues.  You should always put yourself in the shoes of your users coming in for the first time.  You’d be really surprised how quirky something you wrote for yourself is.  MB was my private backup utility for years and I had a set of steps I went through to set it up.  I knew them.  I was used to them.  So it didn’t bother me having to do it.  But expecting everyone to go through those steps is ridiculous.  Sometimes you can only make something so easy, but don’t go out of your way to make it hard.  Step out of your own head.
  18. Get plenty of people to test it out for you. This can be hard because you’ve not only got to find someone willing to put beta software on their box, but they’ve got to be the right person.  Building up a group of reliable beta testers can be the difference between life and death.  I’ve had beta testers find some pretty glaring bugs in my software and I’m grateful for each and every one of them.
  19. Seriously ask yourself if you’re actually adding anything to the market. Are you really solving a problem, or putting a really good spin on something? Or just throwing a slightly different version of the same thing out there?  So if you’re not an expert in the field you’re writing the software in, then do some research and find out what already exists and what the biggest issues are.
  20. The internet is a cold, dark place. Writing software is one thing, and getting the word out is another.  You quickly find that coming up with different ways to get the word out isn’t as easy as you’d think.  It takes persistence too.  You can’t just send out a couple tweets and a blog and call it a day.  It takes dedication and a lot of thought to find the avenues that’ll actually tell people about your stuff.  Keep with it though.
  21. Write software with support in mind. Chances are you’ll have to support what you write, and not leaving yourself in a good position will be the death of you.  So make sure you try to anticipate any issues someone could have and write in some debugging mechanisms.  Your customers will love you for it, and so will you.  And don’t make the debug info too hard to get at.  Remember, you’re the one who’s going to use it, so give yourself what you need.  Sometimes your customers will use it and bypass you altogether.  These are the guys we like.
  22. Writing software is one thing, but learning to support it is another. Sure, you may be the genius behind your code, but that doesn’t mean you have experience troubleshooting it.  Sure, you’ve debugged your code many times on your test box, but what about through email with a customer who won’t just let you on his system?  Do you know the right questions to ask?  Do you know the right things to have them do to repro the more complicated issues?  I’ve had to learn how to support my own products, and it’s shown me that even my debug mechanisms weren’t what I thought they were.  So I’ve had to improve my debugging scenario, and going forward it’ll be first on my mind with every feature.
  23. There’s a fine line between hardcoding things, and having parameters. You can’t hardcode everything, but you can’t have 500 params passed in either.   It’s just too clunky.  So good luck with finding that balance.
  24. Never rest on your laurels. Always be thinking ahead to the next release.  I’m very happy with the current versions of MB and MR, but before the code was released I was already listing enhancements for the next couple releases.
  25. Be honest about your shortcomings. People hate it when you BS them, so don’t even try.  Be honest about how the product works and why.  Not only will people respect you more for it, but you may convert them to your way of thinking.  People who don’t love you won’t love you anyway so you can’t convert everyone, but being honest about your bugs, and your features can go a very long way.  Show them you’re making an honest good-faith effort to write something good.

Improved #MinionBackup and #MinionReindex – new versions!

We released Minion Backup 1.1 and Minion Reindex 1.2 last week! We’ve got some great new features, and a number of bug fixes.

New features in brief: Minion Backup can now back up to NUL. Minion Reindex has improved error trapping and logging, and new statement prefix and suffix options!

Minion Backup 1.1

The one-page MB Highlights PDF is a good place to start if you haven’t laid hands on our backup solution yet.

New feature: You can now take NUL backups, so you can kick start your backup tuning scenario.  For more information, see the section titled “About: Backing up to NUL” in the official product documentation on www.MinionWare.net/Backup/
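
If you haven’t seen a NUL backup before: it pushes the whole database through the backup pipeline and discards the output, which makes it handy for timing backup throughput without writing a file.  A minimal sketch; the database name is illustrative, and COPY_ONLY keeps the throwaway full from resetting your differential base.

    -- Back up to the NUL device: full read, nothing written to disk.
    BACKUP DATABASE MyDB TO DISK = 'NUL'
    WITH COPY_ONLY, COMPRESSION;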

Issues resolved:

  • Fixed mixed collation issues.
  • Fixed issue where Verify was being called regardless of whether there were files that needed verifying.
  • Data Waiter port wasn’t being configured correctly so there were circumstances where the data wasn’t being shipped to the other servers.
  • Greatly enhanced Data Waiter performance. Originally, if a server were down, the rows would be errored out and saved to try for the next execution.  Each row would have to timeout.  If the server stayed offline for an extended period you could accumulate a lot of error rows waiting to be pushed and since they all timed out, the job time began to increase exponentially.  Now, the server connection is tried once, and if the server is still down then all of the rows are instantly errored out.  Therefore, there is only one timeout incurred for each server that’s down, instead of one timeout for each row.  This greatly stabilizes your job times when you have sync servers that are offline.
  • Fixed an issue where the ‘Missing’ parameter wasn’t being handled properly in some circumstances.
  • Fixed issue where Master was discarding differential backups in simple mode.
  • Fixed issue where Master wasn’t displaying DBs in proper order. They were being run in the proper order, but the query that shows what ran wasn’t sorting.
  • Master SP wasn’t handling Daily schedules properly.
  • Reduced DNS lookups by using ‘.’ when connecting to the local box instead of the machine name, which causes a DNS lookup and could overload a DNS server.
  • SQL Server 2008 R2 SP1 service consideration. The DMV sys.dm_server_services didn’t show up until 2008 R2 SP1.  The Master SP only checked for version 10.5 when querying this DMV, so on a 2008 R2 instance below SP1 the check passed but the DMV wasn’t there, and the query failed.  Now we check the full version number, so this shouldn’t happen again (see the sketch just after this list).
  • Master SP wasn’t logging an error when a schedule couldn’t be chosen.
  • Differentials were errored out if they didn’t have a base backup. Now they’re just removed from the list.
  • HeaderOnly data not getting populated on 2014 CU1 and above. MS added 3 columns to the result set so we had to update for this.
  • Increased shrinkLog variable sizes to accommodate a large number of files.
  • Fixed international language issue with decimals.
  • Push to Minion error handling improved. There were some errors being generated that ended SP execution, but those errors weren’t being pushed to the Minion repository.
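
As a hedged illustration of that version-check fix (our sketch, not the shipped Minion code): sys.dm_server_services first appears in 2008 R2 SP1, ProductVersion 10.50.2500, so you have to compare the full build number rather than just the major.minor version.

    -- Illustrative only; not the shipped Minion code.
    DECLARE @v NVARCHAR(128) = CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR(128));
    DECLARE @major INT = PARSENAME(@v, 4),  -- e.g. 10
            @minor INT = PARSENAME(@v, 3),  -- e.g. 50
            @build INT = PARSENAME(@v, 2);  -- e.g. 2500

    -- 2008 R2 SP1 (10.50.2500) or anything newer:
    IF (@major > 10) OR (@major = 10 AND @minor = 50 AND @build >= 2500)
        EXEC('SELECT servicename, status_desc FROM sys.dm_server_services;');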

Minion Reindex 1.2

If you’re new to Minion Reindex, take a look at the one page MR Highlights PDF to get an idea of what we’ve done with a “simple little index maintenance routine”.

New features:

  • Error trapping and logging is improved. Minion Reindex is able to capture many more error situations now, and they all appear in the log table (Minion.IndexMaintLog).
  • Statement Prefix – All of the Settings tables (Minion.IndexSettingsDB, Minion.IndexSettingsTable) now have a StmtPrefix column. See the documentation on www.MinionWare.net/Reindex/ for details. Note: To ensure that your statements run properly, you must end the code in this column with a semicolon.
  • Statement Suffix – All of the Settings tables (Minion.IndexSettingsDB, Minion.IndexSettingsTable) now have a StmtSuffix column.  See the documentation on www.MinionWare.net/Reindex/ for details. Note: To ensure that your statements run properly, you must end the code in this column with a semicolon.
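
As a sketch of how the new columns might be used, here’s a hypothetical UPDATE; the StmtPrefix/StmtSuffix columns are real, but the values and the WHERE clause are our assumptions, so check the documentation for the rows that apply to you.

    -- Illustrative use of StmtPrefix/StmtSuffix; values and row filter are assumptions.
    UPDATE Minion.IndexSettingsDB
    SET StmtPrefix = N'SET DEADLOCK_PRIORITY LOW;',   -- runs before each index statement
        StmtSuffix = N'WAITFOR DELAY ''00:00:02'';'   -- runs after each index statement
    WHERE DBName = 'MinionDefault';                   -- assumed default-settings row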

Issues resolved:

  • Fix: Minion Reindex failed when running on BIN collation.
  • Fix: Help didn’t install if Minion Backup was installed.
  • Fix: Minion Reindex didn’t handle XML and reorganize properly.
  • Fix: ONLINE/OFFLINE modes were not being handled properly.
  • Fix: XML indexes were put into ONLINE mode instead of OFFLINE mode.
  • Fix: Situation where indexes could be processed more than once.
  • Update: Increased Status column in log tables to varchar(max).
  • Fix: Status variable in stored procedures had different sizes.
  • Fix: Wrong syntax created for Wait_at_low_priority option.
  • Fix: Addressed reports that indexes requiring offline rebuilds failed when the mode was set to ONLINE, instead of falling back to running offline.

Get into our Tuesday precon at the PASS Summit

We’re just two weeks away from the PASS Summit in Seattle, and there is most definitely still time to get into our Tuesday pre-conference session, “The Enterprise Scripting Workshop”. That’s a full day of training with 100 of your closest friends* for just $495.

SESSION DETAILS

Abstract:

The database administrator (DBA) life can be frustrating: You rarely have time to innovate because the same tasks fill up your time day after day. Your users are unhappy about how long it takes to resolve “simple” tickets. You need to put big items on hold to manage special requests. As careful as you are, mistakes creep in the busier you get.

In this pre-conference workshop, learn how to develop enterprise scripts with a huge range of uses. A good set of reusable scripts can reduce task time from hours or days to just a few minutes, and eliminate mistakes from your environment.
• Enterprise philosophy: Tackle simple tasks with the whole environment in mind.
• Single data store: Define the benefits and uses of a single central database for common-use data and metadata.
• Choice of tools: Choose the best tool (e.g., PowerShell, T-SQL, SSIS) for the job.
• Environment ground work: Prepare your environment for enterprise scripting.
• Real-world scripts: Work through dozens of enterprise scripting issues (e.g., alerting, error handling, multiple SQL versions) as you develop a real enterprise script in class.

This session is for DBAs with a basic understanding of PowerShell. It’s for anyone who touches backups or security, maintains databases, troubleshoots performance, monitors disk space, or any of a hundred other DBA tasks. Enterprise scripting is for anyone who has more tasks than time.

Session Title:      The Enterprise Scripting Workshop

Session Code:    DBA-298-P

Session Date:     10/27/2015

Session Room:  6A

PRE-CONFERENCE SCHEDULE:

7:30 – 8:30 Continental Breakfast
8:30 – 10:00 Pre-conference Sessions
10:00 – 10:15 Refreshment Break
10:15 – 12:00 Pre-conference Sessions
12:00 – 13:00 Lunch
13:00 – 14:30 Pre-conference Sessions
14:30 – 14:45 Refreshment Break
14:45 – 16:30 Pre-conference Sessions

*May not be exactly 100. Might not be actual closest friends, yet.

Sep 17 #pass24hop session: The Enterprise Scripting Workshop

On September 17 at 7:00 GMT, I’ll be giving a sneak preview of our PASS Summit precon, The Enterprise Scripting Workshop, for 24 Hours of PASS. Here’s the registration link.

Abstract:
The database administrator (DBA) life can be frustrating: You rarely have time to innovate because the same tasks fill up your time day after day. Your users are unhappy about how long it takes to resolve “simple” tickets. You need to put big items on hold to manage special requests. As careful as you are, mistakes creep in the busier you get.

This is a preview of the PASS Summit pre-conference session. In the pre-conference workshop, learn how to develop enterprise scripts with a huge range of uses. A good set of reusable scripts can reduce task time from hours or days to just a few minutes, and eliminate mistakes from your environment.
• Enterprise philosophy: Tackle simple tasks with the whole environment in mind.
• Single data store: Define the benefits and uses of a single central database for common-use data and metadata.
• Choice of tools: Choose the best tool (e.g., PowerShell, T-SQL, SSIS) for the job.
• Environment ground work: Prepare your environment for enterprise scripting.
• Real-world scripts: Work through dozens of enterprise scripting issues (e.g., alerting, error handling, multiple SQL versions) as you develop a real enterprise script in class.

This session is for DBAs with a basic understanding of PowerShell. It’s for anyone who touches backups or security, maintains databases, troubleshoots performance, monitors disk space, or any of a hundred other DBA tasks. Enterprise scripting is for anyone who has more tasks than time.

To AutoGrow or Not?

Note: This is a repost of an older blog that’s still applicable. We’ve updated it with a note or two on how Minion Backup – our free backup solution – and Minion Enterprise – our management solution – can help.

I just got this question in the user group and thought I’d write a blog instead of just answering a subset of users who could benefit from it.  The question was:

I have customized the values of the Auto growth according to the size of the database and the rate at which it grows. I have noticed that Auto growth kicks in about every 3 months – 6 months on an average. Is that OK? I have read articles where the advice on it ranges from “Auto growth is OK” to “Auto growth should kick in only during emergency”.

This is one of those topics that comes up again and again, unlike AutoShrink, which I hope is settled by now.  I suspect it keeps coming up because there’s no real solid answer.

Ok, so whether or not to AutoGrow your files.  I’m going to talk about both data and log files together unless there’s a difference.  So unless I call one out over the other, I’m talking about them both.

Yes. And, no.

You should definitely use AutoGrow.  And you should definitely NOT use AutoGrow.  That’s my way of getting around saying “it depends”.

It depends on a few factors really.

  1. What you’re going to do with the files.
  2. How big your environment is.
  3. How many other files are on the drive.
  4. How much activity is on the files.
  5. Monitoring method

Maybe there’s more, but that’s all I can think of right this second; you get the idea.  Ok, so let’s go through them one at a time.

1.     What you’re going to do with the files.

From time to time I step into a shop where the DBAs have bought into this idea that AutoGrowth is bad so they have some job setup to monitor the size and they grow the files manually.  Now while that sounds like a good idea, it can cause more problems than it solves.  Let’s look at a scenario I’ve encountered more times than I care to try to count.  You get a request from a group to restore the DB to a dev or maybe a QA box so they can do some testing.  This is a common scenario, right?  I’ve done it plenty in almost every shop I’ve been in.

So you go to do the restore and it fails, telling you that there’s not enough space on the drive.  You look, and there’s not that much data in it, so it should fit on the drive, right?  No, not right.  The drive has to support the size of the file, not the size of the data.  So if you’ve got a 50GB drive and a 100GB file, it will fail even if there’s only 20GB of data in that file.  So now what do you do?  Do you go beg the SAN guys for more space, or do you manage your files in smaller increments?

With AutoGrow you can set your own growth rate, so you can have it grow the files at whatever interval you want.  So how is manually growing the file going to add anything to the equation here?  And with Instant File Initialization (IFI) you don’t even have to worry about incurring the cost of zeroing out the file unless it’s a log.
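
For reference, setting your own growth increment is a one-line change per file.  A minimal sketch; the database and logical file names are illustrative.

    -- Grow in fixed-size chunks instead of the default percentage growth.
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 512MB);
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Log,  FILEGROWTH = 256MB);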

Now, for log files specifically, I know some of the top experts say that you can have perf problems if you grow your files too much and get too many VLFs, but honestly that problem really doesn’t come up that often.  And logs are very volatile.  Lots of things log activity that you don’t realize and I wouldn’t want the log file to rely on me.  And again, I can’t stress too much that it really matters what you’re going to be doing with the files.  If you’ve got an extra 60GB of space in your log because you’re afraid of VLFs, then you’ll need that extra 60GB on every other system you plan to restore the DB on.  And you may not be afraid of the VLFs on those lower-level servers.

Minion Backup logs VLFs before every log backup, so you can track how many there are.  This can help you see if you’re growing at the correct rate. And, you can shrink the log to the size you want to help correct any VLF issues that may occur. Even better: MB lets you shrink them only if they’re over a certain size.
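
If you want to eyeball your VLF count yourself: sys.dm_db_log_info does it on SQL Server 2016 SP2 and later, and older versions can use DBCC LOGINFO, which returns one row per VLF.  The database name here is illustrative.

    -- VLF count for one database (2016 SP2 and later):
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID('MyDB'));

    -- Older versions:
    -- DBCC LOGINFO ('MyDB');  -- one row per VLF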

2.      How big your environment is

Now let’s talk about large enterprise environments.  I tend to be in really large shops with hundreds or thousands of servers.  And I don’t know about you, but I don’t wanna spend my time managing file growths.  Consider my last environment where I had over 900 servers with over 4,000 DBs spread across all of them.  And that was just prod.  I’m not going to do that kind of analysis on all of those servers and manually grow all of those files.  And it’s honestly just ridiculous to even try.  There are 2 ways I could solve a problem like this.

I could develop a process where I monitor the free space in all the files, and when it reaches a threshold it grows the file by a certain amount.  Hell, that’s just a homegrown version of autogrow, isn’t it?  So that’s not really a solution.

I could also use autogrow on some of my boxes and manually grow my really important or trouble boxes.  And again we’re back to “it depends” aren’t we?  What we’re saying here is it’s ok to use autogrow on some servers and not on others, which means there’s no solid answer.  You just can’t spend all your time growing files.  Use autogrow here unless you have a reason not to.

3.     How many other files are on the drive?

This argument may or may not have any teeth… it just depends on how you look at it.  The main reason for manually growing your files on drives where you’ve got a lot of other files is fragmentation.  And here I’m talking about fragmentation at the filesystem level, not inside the files themselves.  If you’ve got your files on a drive with lots of other files and they’re all growing, then they’ll be growing over each other, especially if they’re growing in smaller increments.  So you could fragment your drive pretty easily, and that can definitely cause perf issues.  So the solution is typically to manually grow the files to a larger size so it reduces the amount of fragmentation you create when they do grow.  And that does have merit, but why not just set the AutoGrow setting higher then?

I can see a reason why you wouldn’t.  If there are a lot of DBs sharing that drive and they all grow fairly often, then you wouldn’t want to AutoGrow it to a certain size and have it fill up too much of the drive and starve the other DBs.  The most logical way around this issue though is twofold:

AutoGrow at smaller increments.  Unfortunately, this may put you back in the fragmentation scenario though.  If you go this route then you need to defrag the drive on a regular basis and you should be ok.

Split those DBs off onto their own drives.  This is the best solution because you get stuff for free.  Things like simplified space management, 0% fragmentation, and I/O isolation all come along for the ride when you put DB files off onto their own drives.

Minion Enterprise allows you to see and configure your file growth rates across dozens or hundreds of servers, centrally.  While you’re there, you can also see exactly where each database file resides on each server.

However, all that said, if you can’t put the files on their own drives and you’re really afraid of starving the other DB files, then your only real choice may be to monitor the size and grow manually.  But this shouldn’t be the norm if you’re in a big shop.  Keep this kind of activity to a minimum if you can help it.

4.        How much activity is on the files.

This one is almost like the other one, only this doesn’t necessarily rely on what else is on the drive.  This counts even if the file is on its own drive.  If the file grows a lot every day or every week, then you don’t want to take a chance on missing an email alert or whatever else you use and having the file fill up because you didn’t grow it.  So while there may be some exceptions, my skills are better spent elsewhere than growing files manually.

5.        Monitoring method

Many shops monitor with 3rd party tools, and those tools monitor disk space.  However, none of them are smart enough to know the difference between a full drive and a full file.  You could have a 100GB drive with a 99GB data file on it, and the alarm will trip even if the file is only 3% full.  And depending on whether or not your monitoring team is friendly, they may or may not help you out by either turning off the alarm on that drive, or doing something so that it knows something about the space in the file.  I’ve honestly worked with both friendly and unfriendly teams.  So I could either set up an Outlook rule to ignore all space alerts (bad idea) or shrink my file back again so it didn’t trip the alarm.
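
The distinction is easy to see for yourself: free space inside the file is a different number from free space on the drive.  A quick sketch, run in the context of the database you care about:

    -- Space used inside each file of the current database, in MB.
    -- size and FILEPROPERTY report 8KB pages, so divide by 128.
    SELECT name,
           size / 128 AS file_size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS space_used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_in_file_mb
    FROM sys.database_files;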

Minion Enterprise is a management solution – not technically a monitoring solution – but nevertheless, it collects data on drive space and file utilization. And, it comes with configurable drive space alerts.

Conclusion

So you can see there are several factors involved with this decision, and chances are you’ll have a mixed solution.  I’ve worked in shops where I never managed space at the file level, and shops where it was very necessary, and everything in between.  For me, #1 above is one of the biggest deciding factors.  I’m constantly fighting DBAs growing files a lot to be proactive, and then we can’t restore to any of the other environments.  Even DR becomes an issue because you have to have that space anywhere you restore those DBs.  And that’s a lot of extra space to keep on hand for such little return.

Don’t get me wrong, I’m not a big fan of thin provisioning either.  I think that’s going a bit far, but it’s basically the same thing at the SAN level.  Thin provisioning is AutoGrow for the LUN itself.  And the biggest problem I have with it is that they tend to not grow it enough, or they set the threshold too high, so the file fills up and brings the DB down while you’re still waiting for the LUN to expand.  If they can get it right though, it’s not the evil it used to be.  So what we’re really doing with AutoGrow is thin provisioning our DB files.  And that’s actually much easier with IFI because data files expand in just a couple seconds.

That’s only for data files though.  Log files still have to be zeroed out, so you can run into the issue now and then where the log file is still growing when the process runs up against the end of the current file and everything stops.  Hey, it happens.  Those are the cases where you might consider manually growing your log files.  These would be more DSS-type systems where it’s unlikely that you’ll restore it to a different box.

Having huge files can also slow down your DR plan.  If you’ve got a huge log file and a 30min SLA, you could easily spend more time than that zeroing out your log file.  So you’ve all but guaranteed you’ll miss your SLA just by trying to make sure you don’t run into an almost non-existent VLF issue.  So you’ve got to consider that too.

So anyway, I hope this helps you at least consider the different factors involved in making this decision.  Leave me comments if I’ve messed something up really badly.  Or if I’ve gotten something really right.  Hell, just tell me I have great hair and call it a day.

“What users are in this group?”

We solved this question.

Update: Sign up for one of our Minion Enterprise demos this coming Friday, July 3!

Minion Enterprise collects SQL Server login data, as well as Active Directory information, for an entire enterprise. The AD expansion module ties this data together to provide so much insight:

  • Find out what users are in a Windows group…especially those groups that have sysadmin privileges!
  • List all users that have SA rights on any instance in the environment.
  • Discover which SQL Server instances a specific user has access to, and via what groups.
  • Filter by environment, location, SLA, server, login type, or any combination of the data available.
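
For comparison, the closest built-in, single-instance equivalent is xp_logininfo, which expands one Windows group on one server; Minion Enterprise answers the same question across the whole environment at once.  The domain and group names below are illustrative.

    -- Expand a Windows group's membership as SQL Server sees it.
    EXEC master.dbo.xp_logininfo
         @acctname = N'MYDOMAIN\SQLAdmins',
         @option   = 'members';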

These are the exact questions we’ve always needed answered, in every single shop. So, we know this will be immensely useful in your shop.

One client was recently able to reduce their SQL access on one server by two-thirds. They simply used the AD expansion module to identify the rogue group with hundreds of members, and removed that group’s rights.

Take a look at the AD expansion module demo below, and then get in touch for your own 90 day trial license of Minion Enterprise.

Minion Backup intro webinar June 3

Minion Backup 1.0 is up and available for download as of now!

Minion Backup by MidnightDBA is a stand-alone database backup module.  Once installed, Minion Backup automatically backs up all online databases on the SQL Server instance, and will incorporate databases as they are added or removed.

Join the Minion Backup webinar on Wednesday June 3

Register today for our webinar, Introducing Minion Backup, on Wednesday, June 3 at 12:00 PM CDT. Sean will introduce Minion Backup, walk through demos, and take questions.

We released Minion Backup

It’s awesome. It’s huge. We actually managed to get everything we planned into version 1.0. Not everything we wanted, mind you: there’s still half a ton of features we have on the docket for the next few versions. But what we have done is still massive.

One short blog post won’t cover how revolutionary (yes, we’re serious: revolutionary) Minion Backup is. One job for all schedules, yes. Availability Group aware, check. Copy, move, mirror, compress, encrypt backups. Dynamic backup tuning. Backup archival. Custom retention settings. Extensive live logging. And on and on. Since we couldn’t cover it all here, we wrote 132 pages of documentation (available in DOCX, PDF, and a zipped RTF), including a favorites feature list, a quick start, how-tos, and more.

While you’re at it, take a look at our several tutorial videos on MidnightDBA.com (or at YouTube.com/MidnightDBA if you prefer).

And oh by the way, what’s with “MinionWare”?

MidnightDBA is the banner for our free training. MidnightSQL Consulting, LLC is our actual consulting business. And now, we’ve spun up MinionWare, LLC as our software company. We released our new SQL Server management solution, Minion Enterprise, under the MinionWare banner. And now, all the little Minion guys will live together on www.MinionWare.net.

Minion Reindex, Minion Backup, and other Minion modules are, and will continue to be, free. Minion Enterprise is real enterprise software, and we’d love the chance to prove to you that it’s worth paying for. Get in touch at www.MinionWare.net and let’s do a demo, and get you a free 90 day trial!