Category Archives: MidnightDBA News

Happy Birthday Minion

Update: Our birthday is past and the giveaway is over, but you can still get a 90 day trial over on MinionWare.net. And if you like, check out the recording of the Welcome to Minion Enterprise webinar that we held right after!

It’s been a year since we officially launched MinionWare and our flagship product, Minion Enterprise. Since then we’ve hit many SQL Saturdays, and a few other events, to spread the word. And things are looking bright! We’ve been welcomed openly by the community we’ve given so much time to, and we’re finding our foothold as a vendor.

You guys know our business model: Give away as much as you can.  We started by giving away our world-class maintenance tools (Minion Backup and Minion Reindex), and we’ve committed ourselves and our company to only making them better.  With the excitement we feel about the upcoming release of Minion CheckDB and the fact that we just passed our 1 year anniversary, we’ve decided to do something bigger…give away Minion Enterprise.

From now until 5:00pm (Central Time), on July 15, 2016, anyone who emails us a request will get 3 free Minion Enterprise licenses. 

We really want to say thanks to the SQL community worldwide and we couldn’t think of a better way.  Somehow merely saying thanks just didn’t seem big enough. So, thanks…and have some free enterprise management software for life.

How about some free licenses?

Of course, there are just a couple of caveats, so see the restrictions below:

  1. Email us before 5:00pm Central Time on July 15.  If you’re even 1 minute late, that’s too bad, because the offer is over.
  2. This is available for the current version only.  Free licenses are eligible for patches and service releases of the current version, but not upgrades.
  3. Support will be offered for 3 months.  Afterwards a support contract will need to be purchased.
  4. Any additional licenses will need to be purchased.
  5. Licenses are not transferable to any other companies.

Minion CheckDB Beta

Come meet Codex!
We’ve had many of you asking to be part of the Minion CheckDB beta and now is the time. We’re putting the finishing touches on the 1st beta and it’s looking great with some fabulous features.
So this is the open call for beta users. If you’d like to meet Codex before anyone else then send me an email.
We have some requirements though. We don’t want dead beta users. This is your chance to shape the product and we want to hear from you. So if you’re serious about putting the product through its paces then we definitely want you. So you should be ready to provide real feedback, report bugs as you find them, and work with us to fix them.

That’s it. Just be ready to work with us. Many of you have been part of our betas before so you know we’re very responsive and we do our best to give you the product you want to use. We’re going to try to update the beta monthly, but possibly more often if we have an important feature we need to get into your hands.

We’ve got to finish up some details, make a video, and maybe write some base-level docs, so we’ll probably get it into your hands late next week. But we want to know now who’s going to be in the program. So don’t wait: get your email in to me soon, and we’ll let you know within a day or so whether you’ll be accepted into this cycle. It’s going to depend on whether you’re going to be active.

Chicago Mafia Will Rise Again in March

This is a critical time in Chicago’s history. At no other time have they been in more danger of having the mafia rise up and take over the city. Gangsters like Capone are going to be more powerful than ever within the next 2 weeks and there’s little the police can do to stop it. That doesn’t mean that they can’t be stopped. What we need are some really strong enterprise scripters out there. DBAs who know how to get the job done so they can go to lunch on time. With all this extra DBA presence in local restaurants on a regular basis, the mafia won’t have a chance to take hold. This is critical people, don’t let this happen. Sign up for our Enterprise Scripting Precon. The life you save may be your own.

You can sign up here: https://www.eventbrite.com/e/the-enterprise-scripting-workshop-a-sql-saturday-chicago-precon-tickets-19917182830

If you let this opportunity go by, then we’re not responsible for what happens to the city of Chicago, or maybe even your family.

A Very Heated Argument about Backup Tuning in Minion Backup

A couple weeks ago we here at MinionWare got into a very heated argument that lasted most of the morning and part of the afternoon. The argument was around the backup tuning settings in Minion Backup (MB), and how they should work vs. how they actually work.
The problem came about because Jen was doing some testing for her first MB session at a user group. She came across an issue with the tuning settings when she added the time component to the Minion.BackupTuningThresholds table. She noticed that she wasn’t getting the tuning settings she thought she should get when she was trying to tune for a specific time of day. So naturally she assumed I was stupid and filed it as a bug.

In actuality though it’s doing exactly what it’s supposed to, and it’s following the letter of the Minion Backup law. That law is “Once you’re at a level, you never go back up”. Let me show you what I mean.

Precedence in the Tuning Thresholds table

Take a look at this sample Minion.BackupTuningThresholds table.

[Image: sample rows from the Minion.BackupTuningThresholds table]
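Since the screenshot doesn’t reproduce well here, here’s a minimal sketch of the kind of rows we’re talking about. The file counts, the NumberOfFiles column name, and the ThresholdMeasure column are illustrative assumptions; only DBName, BackupType, ThresholdValue, and the 20GB/150GB thresholds come straight from the discussion below. Verify the real schema before running anything like this.

-- Illustrative only: values and some column names are assumptions, not the
-- actual screenshot contents.
INSERT INTO Minion.BackupTuningThresholds
    (DBName, BackupType, ThresholdMeasure, ThresholdValue, NumberOfFiles)
VALUES
    ('MinionDefault', 'All',  'GB', 0,   1),  -- zero row for the default level
    ('MinionDefault', 'All',  'GB', 20,  4),  -- lowest nonzero default threshold (20GB)
    ('MinionDev',     'All',  'GB', 0,   1),  -- zero row for the MinionDev All set
    ('MinionDev',     'Full', 'GB', 0,   2),  -- zero row for the MinionDev Full set
    ('MinionDev',     'Full', 'GB', 150, 8);  -- lowest nonzero Full threshold (150GB)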

Ok, in the above table we’ve got some tuning rows. This is a truncated version of the table, but it’s all we need to demonstrate precedence. We’ve got two rule sets here; one for MinionDefault (the row that provides all the default configuration settings), and one for MinionDev (a specific database on my server).

  • MinionDefault is a global setting that says unless the DB has an override, it’ll take its rows from here.
  • MinionDev is the only DB on this server that has an override, so it’ll take its settings from the MinionDev rows.

At the most basic level, the precedence rule states that once there is an override row for a database, that database will never leave that level…it will never default back to the default row. So in this example, MinionDev is at the database level for its settings, so it will never go back up to the more generic MinionDefault row. Once you’re at a level, you stay at that level.

A “Zero Row” for every level

I’m going to explain how these rows work, and why they are the way they are. Notice that for both levels (that is, for the MinionDefault rows, and for the MinionDev rows), there is what we call a zero row. This is where the ThresholdValue = 0. The zero row is especially important for the MinionDefault row, because this is what covers all DBs; it’s quite possible that you could get a database that’s less than your lowest threshold value.

In the above table, the lowest (nonzero) threshold value for MinionDefault is 20GB. That means that no DBs under 20GB will get any tuning values. Without any tuning values, the number of files would be NULL, and therefore you wouldn’t be able to back up anything…they wouldn’t have any files. So setting the zero row is essential.

And since each DB stays at that level once it’s got an override, whenever you put in a DB-level override it’s an excellent idea to give that DB a zero row as well. It may be 50GB now, but if you ever run an archive routine that drops it below your lowest threshold, then your backups will stop if you don’t have that zero row to catch it. Did I explain that well enough? Does it make sense?
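If you want a quick sanity check, a query like this sketch (using only the column names discussed in this post) lists any override levels that are missing their zero row:

-- Sketch: flag any DBName/BackupType level whose lowest ThresholdValue isn't 0,
-- i.e. levels without a zero row to catch small databases.
SELECT DBName, BackupType
FROM Minion.BackupTuningThresholds
GROUP BY DBName, BackupType
HAVING MIN(ThresholdValue) > 0;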

That’s how the rule is applied at a high level between DBs. Let’s now look at how it’s applied within the DB itself.

“Zero Rows” within the database level

As I just stated above, you should really have a zero row for each database that has an override row (you know, where DBName = <yourDBname>).

Let’s look at MinionDev above. It has a BackupType=All set, and a BackupType=Full set. The All set takes care of all backup types that don’t have backup type overrides. So in this case, the All set takes care of Log and Diff backups, because there’s a specific override for Full. Get it? Good, let’s move on.

Notice that MinionDev has a zero row for the All set, and a zero row for the Full set. This is essential because, following the rules of precedence, once it’s at the MinionDev/Full level, it doesn’t leave that level. So again, if your database ever falls below your lowest tuning threshold – in this case, 150GB – and there’s no zero row to catch it, the backup will fail, because there are no tuning parameters defined below 150GB. This again is why the zero row is so important: it provides settings for all backups that fall below your lowest tuning setting.

And, if you were to put in a BackupType=Log override for MinionDev, it would also need to have a zero row. I could argue that it’s even more important there because it’s quite possible that your log could be below your tuning threshold.

So now, our Argument

That’s how the precedence actually works in the Minion.BackupTuningThresholds table. The argument started when Jen thought that it should move back up to the All set if a specific BackupType override falls below its tuning threshold. So in other words, in the above table, she wouldn’t require a zero row for the MinionDev-Full set. Instead, if the DB size fell below the 150GB threshold, she would move it back up to the MinionDev-All set, and take the lowest tuning threshold from there.

She said that it wasn’t in the spirit of the precedence rules to make the setting quite that pedantic. So after hours of arguing, drawing on the board, making our case, sketching out different scenarios, etc… we just kinda lost steam and moved on, because she had to get ready for her talk.

The point is though that this is the way it currently works: once it’s at its most specific level, it stays there. So, if you have tuning settings for specific backup types, you’d be really well served to have a zero row for each one just in case.

And I’ll also note that BackupType is the lowest granularity. So, Day and Time (another config option in this table) have nothing to do with this setting. You need to concentrate on the DBName and BackupType. Everything else will fall into place.

Final Caveat: We break the rule (a little)

Now, I know it sounds like a contradiction, but there is just one place where I break this rule. I call it the FailSafe. With the FailSafe, it’s possible to have specific overrides and still get your tuning thresholds from the MinionDefault zero row. Here’s why:

This is a rather nuanced config in Minion Backup, and it’s fairly easy to get something wrong and wind up without a backup. I didn’t want that to happen. So, if you do something like leave your zero row out for an override level, and your DB falls below your lowest threshold setting, you would wind up without any backup because there isn’t a number of files to pass to the statement generator.

The FailSafe says: if you screw up and don’t have a tuning setting available, MB will grab settings from the MinionDefault zero row.

In this situation, I kick in the FailSafe mechanism, which pulls the tuning settings from the MinionDefault zero row. At least you’ll have a backup, even if it’s slow.

(That was one of Jen’s arguments: that a FailSafe is a great idea, but she wants it to come from the DB-All set instead of the MinionDefault-All set. I don’t know, maybe she’s right. Maybe that’s more intuitive. I’ll have to think about it. It wouldn’t be that big of a change really. I could walk up the chain. In the above table I could try the MinionDev-All zero row and if that doesn’t exist then I could use the MinionDefault-All zero row. What do you guys think?)
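Just to make that walk-up idea concrete, here’s a rough sketch of the resolution order in T-SQL. This is illustrative only – not how Minion Backup actually resolves its settings – and the NumberOfFiles column name is assumed from the “number of files” mentioned earlier:

-- Sketch: prefer the MinionDev zero row, and only fall back to the
-- MinionDefault zero row if the DB-level one doesn't exist.
SELECT TOP (1) NumberOfFiles
FROM Minion.BackupTuningThresholds
WHERE ThresholdValue = 0
  AND BackupType = 'All'
  AND DBName IN ('MinionDev', 'MinionDefault')
ORDER BY CASE DBName WHEN 'MinionDev' THEN 1 ELSE 2 END;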

So why not just hardcode a single file into the routine so that when this happens you’re backing up to that single file? The answer is: flexibility. Your MinionDefault zero row may be set to 4 files because all your databases are kinda big and you don’t ever want to back up with fewer than that. So, set your MinionDefault zero row to something you want your smallest DB to use. If that’s a single file, then ok, but if it’s 4 or 6 files, then also ok. That’s why I didn’t hardcode a value into the FailSafe: It’s all about giving you the power to easily configure the routine to your environment.
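For instance, a sketch like this (again, NumberOfFiles is an assumed column name) sets the default zero row to 4 files:

-- Sketch: make the MinionDefault zero row use 4 files, so the smallest DBs
-- (and any FailSafe fallback) never back up with fewer files than that.
UPDATE Minion.BackupTuningThresholds
SET NumberOfFiles = 4
WHERE DBName = 'MinionDefault'
  AND BackupType = 'All'
  AND ThresholdValue = 0;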

Takeaways:

  1. The precedence rules are followed to the very letter of the law.
  2. Once a database is configured at a level, it stays there.
  3. The configuration level is specific to DBName, and then (at the next most specific level) to the DBName and BackupType.
  4. Whenever you have a database-level override row, always have a zero row for it.
  5. Whenever you have a BackupType-level override, always have a zero row for it.
  6. The FailSafe defaults back to the MinionDefault zero row if a level-appropriate setting isn’t available.

Ok, that’s it for this time. I hope this explanation helps you understand the reasoning behind what we did.

ServerLabels in Minion Backup

There’s a great way to increase the effectiveness of your backup and HA strategy: use the ServerLabel feature in Minion Backup.

The problem with most backup solutions is that they don’t take AG failover into account.  Here’s a common scenario to show you what I mean.

Let’s say you’re backing up to \\BackupNAS\SQLBackups.  Most of the time, your backup routine will append the server name and probably the database name to the path.  There are other things that can get added, but we’ll keep this simple.  So your backup path winds up looking like this instead: \\BackupNAS\SQLBackups\Server1\MyDB.  The problem comes when you’re running an AG and you’re either taking backups on different nodes or when your backup node fails over and the backups continue on a different node.  Either way you’re stuck with your backups being in two different locations.  Here’s what I mean.

Before Failover:

Full backup – \\BackupNAS\SQLBackups\Server1\MyDB
Log backup – \\BackupNAS\SQLBackups\Server1\MyDB
Log backup – \\BackupNAS\SQLBackups\Server1\MyDB
Log backup – \\BackupNAS\SQLBackups\Server1\MyDB
Log backup – \\BackupNAS\SQLBackups\Server1\MyDB

After Failover:

Log backup – \\BackupNAS\SQLBackups\Server2\MyDB
Log backup – \\BackupNAS\SQLBackups\Server2\MyDB
Log backup – \\BackupNAS\SQLBackups\Server2\MyDB
Log backup – \\BackupNAS\SQLBackups\Server2\MyDB

Fail Back to Original Node:

Diff backup – \\BackupNAS\SQLBackups\Server1\MyDB
Log backup – \\BackupNAS\SQLBackups\Server1\MyDB
Log backup – \\BackupNAS\SQLBackups\Server1\MyDB
Log backup – \\BackupNAS\SQLBackups\Server1\MyDB
Log backup – \\BackupNAS\SQLBackups\Server1\MyDB

So you can see that there are different backups in different locations. And the log chain starts on Server1, then moves to Server2, and then back to Server1. This can make it very difficult to build a restore statement if you don’t really know where your files are going to be. And if you look above, you’ll also see that a diff backup was taken once it failed back to Server1. But if Server1 weren’t the primary node, then the diffs would be taken on yet another server, which would add a third location into the mix.

This exact scenario is what we have solved in Minion Backup.  With MB you can define a ServerLabel that gets used instead of the server name.  Let’s say we define the ServerLabel to be ‘AGListener1’.  We can do that with a simple update statement like this:

 

UPDATE Minion.BackupSettingsPath
SET ServerLabel = 'AGListener1';

 

Now every backup on your server is going to use this ServerLabel instead of the server name.  Here’s what the failover scenario above looks like with this new ServerLabel.

 

Before Failover:

Full backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB

After Failover:

Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB

Fail Back to Original Node:

Diff backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB
Log backup – \\BackupNAS\SQLBackups\AGListener1\MyDB

Problem solved. Now no matter which node you back up on, the files go to the same location. And why not? They’re the same database, aren’t they? So why complicate things by having them go to separate locations just because your AG failed over?

And it’s not just for AGs.  You can use a ServerLabel for any server you like.  Say you want to associate it with the DNS name of the server instead, or the application name.  That’s just as easy.  ServerLabel is here to give you a customizable server name to use in your path.

And it doesn’t stop there. You can set up a ServerLabel for specific databases on a server, or even for specific backup types, or backup types for specific databases. It’s very flexible.
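For example, a database-specific label might look something like this sketch (assuming Minion.BackupSettingsPath carries DBName and BackupType scoping columns like the other Minion settings tables; check your schema first):

-- Sketch: give one database's log backups their own label.
UPDATE Minion.BackupSettingsPath
SET ServerLabel = 'AGListener1'
WHERE DBName = 'MyDB'
  AND BackupType = 'Log';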

It’s such a tiny, unassuming feature, but it can have tremendous effects on your backup scenario. You can see it in action in this video: http://midnightdba.itbookworm.com/Video/Watch?VideoId=428, and check out our other Minion Backup videos at http://midnightdba.itbookworm.com/Minion/Backup.

19 Things you didn’t know about Minion Backup

I thought I’d have a little fun here.

  1. The basis for Minion Backup has been used for years by the MidnightDBA team at various shops. And while it was the inspiration for the current iteration of Minion Backup, the previous non-commercial versions were so poorly written that Sean considers them an embarrassment, and they will never see the light of day again.
  2. There are portions of Minion Backup that were completely re-written several times as different things came about.
  3. The hardest feature to write was the Data Waiter. It was re-written several times before a workable version was found.
  4. The Minion Backup suite contains 14,290 lines of code.
  5. The features in the Minion suite follow a pattern. A feature is released in one product, and then it gets rolled out into the other products. Then another product gets a new feature that in turn gets rolled out into the other products. So a single product is used as a pilot for a single feature.
  6. Our service packs also follow a pattern: no matter how long we wait before releasing one, within a week of the release someone reports a bug that would have been an easy fix to include.
  7. We didn’t write Minion Backup for the community. We wrote it for ourselves. We just released it to the community because we knew you’d love it as much as we do.
  8. While it’s honestly impossible to nail down any one thing, Sean thinks the most useful feature of Minion Backup is the BackupPaths table. However, the feature he’s the most proud of writing is Dynamic Tuning.
  9. The feature Jen thinks is the most useful is the pre/post code. And the feature she’s the most proud of is the fact that Minion Backup keeps track of files as they’re moved or copied and even keeps them in the delete rotation.
  10. We don’t have a voting system for feature requests. If even one person requests a feature, we’ll put it in if it’s a good idea.
  11. We usually don’t add features in service packs, though we’re starting to change that policy. Sometimes there’s just no reason to wait.
  12. We seek large customers, or customers with edge case scenarios to perfect our features. We’ve got power users for almost every aspect of the product and we go to them for enhancement ideas and bug fixes.
  13. We spend more time supporting Minion Backup than we do any other product. Not because it has more bugs, but because it’s so popular and so configurable. Most issues are configuration related. And we try to document issues like this better, so that means even more documentation.
  14. We feel we’ve already overloaded users with too much documentation. But the answers are almost always there if you just look. And while it’s too much for most, someone always appreciates that level of documentation.  But yeah, even we think it’s a lot.
  15. There were times we were so frustrated with getting a specific feature to work properly we almost scrapped the project completely. Thankfully it was just a momentary tantrum.
  16. Not a single feature idea was borrowed from another product. Everything was something we wanted to do. We have had a few users suggest features or enhancements that made it in.
  17. People are starting to teach Minion Backup sessions at user groups and conferences. What a great compliment to our product.  We honestly never expected that.
  18. We never even thought about charging for Minion Backup. It was always going to be a free tool.  And even though it’s been suggested to us a number of times that it’s ridiculous for us to put so much effort into a free tool, we still have no plans for it.
  19. Most of our feature ideas didn’t occur until we decided to take it public. That seems to contradict #7 where I said we wrote it for ourselves. It kind of happened hand in hand. We decided to take it public, but then we started viewing ourselves as the public and asking ourselves what features we’d want and all the different scenarios we’ve been in over the years. We wanted to cover every single one of them. And we wanted to make it as easy and flexible as possible. This is what proved to be the most difficult.

There you go folks, our Minion Backup trivia.

Improved #MinionBackup and #MinionReindex – new versions!

We released Minion Backup 1.1 and Minion Reindex 1.2 last week! We’ve got some great new features, and a number of bug fixes.

New features in brief: Minion Backup can now back up to NUL. Minion Reindex has improved error trapping and logging, and new statement prefix and suffix options!

Minion Backup 1.1

The one-page MB Highlights PDF is a good place to start if you haven’t laid hands on our backup solution yet.

New feature: You can now take NUL backups, so you can kick-start your backup tuning scenario. For more information, see the section titled “About: Backing up to NUL” in the official product documentation on www.MinionWare.net/Backup/
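(For reference, a plain T-SQL backup to NUL looks like the sketch below, with MyDB as a placeholder; Minion Backup handles this for you through its own settings, so see the docs for the MB-specific way to do it.)

-- Native T-SQL equivalent of a throwaway tuning backup: nothing is written to disk.
BACKUP DATABASE MyDB
TO DISK = 'NUL'
WITH COPY_ONLY, COMPRESSION, STATS = 10;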

 Issues resolved:

  • Fixed mixed collation issues.
  • Fixed issue where Verify was being called regardless of whether there were files that needed verifying.
  • Data Waiter port wasn’t being configured correctly so there were circumstances where the data wasn’t being shipped to the other servers.
  • Greatly enhanced Data Waiter performance. Originally, if a server were down, the rows would be errored out and saved to try for the next execution.  Each row would have to timeout.  If the server stayed offline for an extended period you could accumulate a lot of error rows waiting to be pushed and since they all timed out, the job time began to increase exponentially.  Now, the server connection is tried once, and if the server is still down then all of the rows are instantly errored out.  Therefore, there is only one timeout incurred for each server that’s down, instead of one timeout for each row.  This greatly stabilizes your job times when you have sync servers that are offline.
  • Fixed an issue where the ‘Missing’ parameter wasn’t being handled properly in some circumstances.
  • Fixed issue where Master was discarding differential backups in simple mode.
  • Fixed issue where Master wasn’t displaying DBs in proper order. They were being run in the proper order, but the query that shows what ran wasn’t sorting.
  • Master SP wasn’t handling Daily schedules properly.
  • Reduced DNS lookups by using ‘.’ when connecting to the local box instead of the machine name, which causes a DNS lookup and could overload a DNS server.
  • SQL Server 2008 R2 SP1 service consideration. The DMV sys.dm_server_services didn’t show up until R2 SP1. The Master SP only checked for version 10.5 when querying this DMV, so on a 10.5 server without SP1 this failed because the DMV isn’t there. Now we check the full version number, so this shouldn’t happen again.
  • Fixed: the Master SP wasn’t logging an error when a schedule couldn’t be chosen.
  • Fixed a situation where differentials would be errored out if they don’t have a base backup; now they’re just removed from the list.
  • HeaderOnly data not getting populated on 2014 CU1 and above. MS added 3 columns to the result set so we had to update for this.
  • Increased shrinkLog variable sizes to accommodate a large number of files.
  • Fixed international language issue with decimals.
  • Push to Minion error handling improved. There were some errors being generated that ended SP execution, but those errors weren’t being pushed to the Minion repository.


Minion Reindex 1.2

If you’re new to Minion Reindex, take a look at the one page MR Highlights PDF to get an idea of what we’ve done with a “simple little index maintenance routine”.

New features:

  • Error trapping and logging is improved. Minion Reindex is able to capture many more error situations now, and they all appear in the log table (Minion.IndexMaintLog).
  • Statement Prefix – All of the Settings tables (Minion.IndexSettingsDB, Minion.IndexSettingsTable) now have a StmtPrefix column. See the documentation on www.MinionWare.net/Reindex/ for details. Note: To ensure that your statements run properly, you must end the code in this column with a semicolon.
  • Statement Suffix – All of the Settings tables (Minion.IndexSettingsDB, Minion.IndexSettingsTable) now have a StmtSuffix column.  See the documentation on www.MinionWare.net/Reindex/ for details. Note: To ensure that your statements run properly, you must end the code in this column with a semicolon.

Issues resolved:

  • Fix: Minion Reindex failed when running on BIN collation.
  • Fix: Help didn’t install if Minion Backup was installed.
  • Fix: Minion Reindex didn’t handle XML and reorganize properly.
  • Fix: ONLINE/OFFLINE modes were not being handled properly.
  • Fix: XML indexes were put into ONLINE mode instead of OFFLINE mode.
  • Fix: Situation where indexes could be processed more than once.
  • Update: Increased Status column in log tables to varchar(max).
  • Fix: Status variable in stored procedures had different sizes.
  • Fix: Wrong syntax created for Wait_at_low_priority option.
  • Fix: Addressed reports of offline-only indexes failing when the setting was ONLINE, instead of falling back to doing them offline.


Get into our Tuesday precon at the PASS Summit

We’re just two weeks away from the PASS Summit in Seattle, and there is most definitely still time to get into our Tuesday pre-conference session, “The Enterprise Scripting Workshop”. That’s a full day of training with 100 of your closest friends* for just $495.

SESSION DETAILS

Abstract:

The database administrator (DBA) life can be frustrating: You rarely have time to innovate because the same tasks fill up your time day after day. Your users are unhappy about how long it takes to resolve “simple” tickets. You need to put big items on hold to manage special requests. As careful as you are, mistakes creep in the busier you get.

In this pre-conference workshop, learn how to develop enterprise scripts with a huge range of uses. A good set of reusable scripts can reduce task time from hours or days to just a few minutes, and eliminate mistakes from your environment.
• Enterprise philosophy: Tackle simple tasks with the whole environment in mind.
• Single data store: Define the benefits and uses of a single central database for common-use data and metadata.
• Choice of tools: Choose the best tool (e.g., PowerShell, T-SQL, SSIS) for the job.
• Environment ground work: Prepare your environment for enterprise scripting.
• Real-world scripts: Work through dozens of enterprise scripting issues (e.g., alerting, error handling, multiple SQL versions) as you develop a real enterprise script in class.

This session is for DBAs with a basic understanding of PowerShell. It’s for anyone who touches backups or security, maintains databases, troubleshoots performance, monitors disk space, or any of a hundred other DBA tasks. Enterprise scripting is for anyone who has more tasks than time.

Session Title: The Enterprise Scripting Workshop
Session Code: DBA-298-P
Session Date: 10/27/2015
Session Room: 6A

PRE-CONFERENCE SCHEDULE:

7:30 – 8:30 Continental Breakfast
8:30 – 10:00 Pre-conference Sessions
10:00 – 10:15 Refreshment Break
10:15 – 12:00 Pre-conference Sessions
12:00 – 13:00 Lunch
13:00 – 14:30 Pre-conference Sessions
14:30 – 14:45 Refreshment Break
14:45 – 16:30 Pre-conference Sessions

*May not be exactly 100. Might not be actual closest friends, yet.

Sep 17 #pass24hop session: The Enterprise Scripting Workshop

On September 17 at 7:00 GMT, I’ll be giving a sneak preview of our PASS Summit precon, The Enterprise Scripting Workshop, for 24 Hours of PASS. Here’s the registration link.


Abstract:
The database administrator (DBA) life can be frustrating: You rarely have time to innovate because the same tasks fill up your time day after day. Your users are unhappy about how long it takes to resolve “simple” tickets. You need to put big items on hold to manage special requests. As careful as you are, mistakes creep in the busier you get.

This is a preview of the PASS Summit pre-conference session. In the pre-conference workshop, learn how to develop enterprise scripts with a huge range of uses. A good set of reusable scripts can reduce task time from hours or days to just a few minutes, and eliminate mistakes from your environment.
• Enterprise philosophy: Tackle simple tasks with the whole environment in mind.
• Single data store: Define the benefits and uses of a single central database for common-use data and metadata.
• Choice of tools: Choose the best tool (e.g., PowerShell, T-SQL, SSIS) for the job.
• Environment ground work: Prepare your environment for enterprise scripting.
• Real-world scripts: Work through dozens of enterprise scripting issues (e.g., alerting, error handling, multiple SQL versions) as you develop a real enterprise script in class.

This session is for DBAs with a basic understanding of PowerShell. It’s for anyone who touches backups or security, maintains databases, troubleshoots performance, monitors disk space, or any of a hundred other DBA tasks. Enterprise scripting is for anyone who has more tasks than time.

Coming soon: Minion Backup, featuring table based scheduling!

The MidnightDBA team is announcing the release of a new, free backup solution for SQL Server: Minion Backup arrives on June 1!

Minion Backup by MidnightDBA is a stand-alone database backup module.  Once installed, Minion Backup automatically backs up all online databases on the SQL Server instance, and will incorporate databases as they are added or removed.

We created Minion Backup (or MB, for short) to be the most flexible, feature-rich backup solution possible. Our goal for this initial release was to include functionality for as many backup scenarios as possible. We’ve included certificate backups, HA and DR awareness, restore script generation, “what if” functionality for deletes, the ability to run a batch for “missing” backups, built-in manual runs, rollup and detail data in the backup logs, the ability to deactivate most settings, copy / move / stripe / mirror backup files, etc.

Table based scheduling

While there are about fifty features I’d like to talk about, I’m going to restrain myself (today) and talk about the one feature I’m most excited about (today): table based scheduling.

When Minion Backup is installed, it creates a single backup job that runs the master backup stored procedure every 30 minutes.  That master procedure checks the Minion.BackupSettingsServer table to determine what backups should be run for the current day and time.

By default, Minion Backup comes installed with the following scenario:

  • Full system backups are scheduled daily at 10:00pm.
  • Full user backups are scheduled on Saturdays at 11:00pm.
  • Differential backups for user databases are scheduled every day except Saturday (weekdays and Sunday) at 11:00pm.
  • Log backups for user databases run daily as often as the backup runs (every 30 minutes).

Let’s look at just a few of the columns of this default scenario in Minion.BackupSettingsServer:

ID  DBType  BackupType  Day       BeginTime  EndTime   MaxForTimeframe  Include  Exclude
1   System  Full        Daily     22:00:00   22:30:00  1                NULL     NULL
2   User    Full        Saturday  23:00:00   23:30:00  1                NULL     NULL
3   User    Diff        Weekday   23:00:00   23:30:00  1                NULL     NULL
4   User    Diff        Sunday    23:00:00   23:30:00  1                NULL     NULL
5   User    Log         Daily     00:00:00   23:59:00  48               NULL     NULL

I’m not going to fully document this table here – I’ll be happy to send you a draft of the product documentation if you can’t wait for the release date – but you get an initial impression of how flexible this scenario can be, especially in conjunction with other settings tables. I will note that “Include” and “Exclude” allow comma delimited lists of databases (and/or LIKE operators) to include in, or exclude from, the particular backup scenario; a value of NULL means that all databases are included.
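So, for example, excluding a couple of big databases from the weekly user fulls could look something like this sketch (the database names are placeholders, and ID 2 refers to the user Full row in the table above):

-- Sketch: keep two large databases out of the Saturday user Full row (ID 2 above).
UPDATE Minion.BackupSettingsServer
SET Exclude = 'BigDB1, Staging%'
WHERE ID = 2;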

This is how MB operates by default, to allow for the most flexible backup scheduling with as few jobs as possible.

Table based scheduling presents multiple advantages:

  • A single backup job – Multiple backup jobs are, to put it simply, a pain. They’re a pain to update and slow to manage, as compared with using update and insert statements on a table.
  • Fast, repeatable configuration – Keeping your backup schedules in a table saves loads of time, because you can enable and disable schedules, change frequency and time range, etc., all with an update statement (see the sketch after this list). This also makes standardization easier: write one script to alter your backup schedules, and run it across all Minion Backup instances (instead of changing dozens or hundreds of jobs).
  • Mass updates across instances – With a simple Powershell script, you can take that same script and run it across hundreds of SQL Server instances at once, standardizing your entire enterprise with ease.
  • Transparent scheduling – Multiple backup jobs tend to obscure the backup scenario, because each piece of the configuration is displayed in separate windows. Table based scheduling allows you to see all aspects of the backup schedule in one place, easily and clearly.
  • Boundless flexibility – Table based scheduling provides a stunning degree of flexibility that would be very troublesome to implement with multiple jobs. With a single backup job, you can schedule all of the following:
    • System full backups three days a week.
    • User full backups on weekend days and Wednesday.
    • DB1 log backups between 7am and 5pm on weekdays.
    • All other user log backups between 1am and 11pm on all days.
    • Differential backups for DB2 at 2am and 2pm.
    • Read only backups on the first of every month.

…and each of these can also use dynamic backup tuning, which can also be slated for different file sizes, applicable at different times and days of the week and year.

…and each of these can also stripe across multiple files, to multiple locations, and/or copy to secondary locations, and/or mirror to a secondary location.
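Here’s the kind of one-statement schedule change I mentioned in the list above – a sketch using the columns from the default table, with example times:

-- Sketch: move the user differential window to 01:00-01:30 with a single update,
-- instead of editing job schedules by hand.
UPDATE Minion.BackupSettingsServer
SET BeginTime = '01:00:00',
    EndTime   = '01:30:00'
WHERE DBType = 'User'
  AND BackupType = 'Diff';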

Like I said, there are a zillion and a half more things I’d like to talk about, but we’ll keep it right here for now. Reply below, email, or ping @MidnightDBA on Twitter with questions or comments. And keep an eye out on June 1!

 

Check out Minion Enterprise, our new enterprise management solution for centralized SQL Server management and alerting! 

P.S.  Anticipating a few FAQs (and I’ll add to this as things come up):

  • Yes, you can change how often the backup job runs. If, for example, you only want log backups to run hourly, set your job to run hourly.
  • Yes, absolutely, you still have the option to use the more traditional “multi job” backup scheduling. You’d just disable the single job mentioned above, and configure the new jobs with individual schedules and a parameterized master query. Easy.
  • The Include and Exclude fields aren’t the only way to include and exclude databases, but we’re not going to get into that just now.