
Truncating your Log Files

I want to show you a great piece of code to truncate and shrink all your log files. The first question, of course, is why you’d want to shrink your log files at all. It’s been drilled into everyone’s head that shrinking log files is bad: regrowing them can leave you with too many VLFs, and of course there’s the zeroing out that happens every time a log file grows.
OK, so let’s answer that question. There are a few reasons you’d want to shrink all the files down to a small size.
1. Let’s say you’ve got to move some of your logs to another drive, or even all of them. Left unchecked, logs can get rather big, so you could easily have some very large log files lying around. And when you’ve got a downtime window to move those files, you don’t want to extend it by copying really large (yet mostly empty) log files. It’s best to shrink the files first so you can copy them over quickly.
2. Also, if you’re trying to restore a prod DB to dev or QA and you don’t have the same kind of space over there, then you’ve got to shrink the file. Why try to restore a 90GB log file that’s only 1% full? So shrink the log, take the backup, and then it’ll restore.
3. And of course, if you’ve got several logs on the same drive and one of them gets blown up to an enormous size, it may leave the drive too full for the others to expand. There’s no reason to keep a 50GB or 90GB file mostly empty just because you’re afraid of a VLF problem you probably won’t even have.

So there are 3 reasons you may need to truncate log files. Now here’s the script to actually make the change. It’s simple but it gets the job done.
This is one of those times when a cursor is the perfect tool for the job. I know it’s popular to bad-mouth cursors, but a lot of times they’re exactly right for this type of admin work. There are two ways to do it, though. The cursor solution runs the code against each DB, and you can add error handling and other logic if you like, so it can be built into a more complete solution. But I’m a big lover of code that writes code, so first I’m going to show you a simpler solution that builds the statements with strings in a query.

SELECT 'USE [' + DB_NAME(database_id) + ']; ' + 'DBCC SHRINKFILE([' + name + '], 1024)'
FROM sys.master_files
WHERE type_desc = 'LOG'
AND database_id > 4
ORDER BY DB_NAME(database_id) ASC

You can see that it’s a simple script that gets the job done. If you’re running it from PowerShell, you can execute the results in your PS loop with no problems. You could even write the results to a table and have some other automated process pick up the statements. Or, in cases like this, my favorite is to just select the statements and run them by hand.
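By the way, before you run the generated statements, it’s worth checking how full each log actually is, so you only shrink the ones that are worth the trouble. Here’s a quick sketch using DBCC SQLPERF(LOGSPACE); the #LogSpace temp table and the 1024MB/10% cutoffs are just illustrative, so adjust them to taste.

-- Capture log size and percent used for every DB on the instance.
CREATE TABLE #LogSpace (DBName sysname, LogSizeMB FLOAT, LogSpaceUsedPct FLOAT, Status INT);
INSERT #LogSpace EXEC ('DBCC SQLPERF(LOGSPACE);');

-- Big but mostly empty logs are the best shrink candidates.
SELECT DBName, LogSizeMB, LogSpaceUsedPct
FROM #LogSpace
WHERE LogSizeMB > 1024 AND LogSpaceUsedPct < 10
ORDER BY LogSizeMB DESC;

DROP TABLE #LogSpace;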

Now for the cursor version. Again, you can put lots more logic in here, and you have more control over the entire process in general. So they’re both worthy methods to have.


-----------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
---------------------Truncate and shrink all Log Files-----------------------------------------
-----------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------

/*
The biggest question I get with this is why you would want to shrink all log files.  There's a question of VLFs and log growths, etc.
Well, a simple use case for this is when you need to move a bunch of log files to a new drive and you don't want to be up all night.
Shrink them down and transfer just a few gigs instead of a few dozen or even into the hundreds of gigs.

Another reason is to restore a DB to a dev box or something.  If the drive isn't as big as it is on your main box, you'll need to shrink the log, then take the backup, so the restore can actually succeed.

So even though it may be good to leave your logs alone for the most part, there are times when it's best to trim them.
Of course, the obvious other reason is space.  If you've got a lot of log files on a single drive, then the space needs to be managed a little tighter; and if one of them got
blown out really big for some reason, there's no reason the others have to suffer because you refuse to shrink it.

*/


DECLARE
	@curDBName sysname,
	@curFileName sysname,
	@SQL VARCHAR(4000),
	@FileSize VARCHAR(10);

SET @FileSize = '1024'; -- The size, in MB, you want the files shrunk to.

DECLARE DBName CURSOR FOR
	SELECT DB_NAME(database_id) AS DBName, name AS FileName
	FROM sys.master_files
	WHERE type_desc = 'LOG'
	AND database_id > 4 -- Skip the system DBs.
	ORDER BY DB_NAME(database_id) ASC;

OPEN DBName;
FETCH NEXT FROM DBName INTO @curDBName, @curFileName;

WHILE @@FETCH_STATUS = 0
BEGIN

	SET @SQL = 'USE [' + @curDBName + ']; ';
	SET @SQL = @SQL + 'DBCC SHRINKFILE ([' + @curFileName + '], ' + @FileSize + ')';

	PRINT @SQL;
	--EXEC (@SQL); -- Uncomment to actually run the shrink statements.

	FETCH NEXT FROM DBName INTO @curDBName, @curFileName;

END

CLOSE DBName;
DEALLOCATE DBName;

GO
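One more note: if you’re worried that all this shrinking and regrowing will leave you with a VLF problem, you can just check. On SQL Server 2016 SP2 and later, sys.dm_db_log_info() returns one row per VLF (on older versions, DBCC LOGINFO gives you the same per-database info). Here’s a quick sketch to count VLFs per database:

-- Count VLFs per database (SQL Server 2016 SP2 or later).
SELECT d.name AS DBName, COUNT(*) AS VLFCount
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
WHERE d.state = 0 -- Online DBs only.
GROUP BY d.name
ORDER BY VLFCount DESC;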

A Very Heated Argument about Backup Tuning in Minion Backup

A couple weeks ago we here at MinionWare got into a very heated argument that lasted most of the morning and part of the afternoon. The argument was around the backup tuning settings in Minion Backup (MB), and how they should work vs. how they actually work.
The problem came about because Jen was doing some testing for her first MB session at a user group. She came across an issue with the tuning settings when she added the time component to the Minion.BackupTuningThresholds table. She noticed that she wasn’t getting the tuning settings she thought she should get when she was trying to tune for a specific time of day. So naturally she assumed I was stupid and filed it as a bug.

In actuality though it’s doing exactly what it’s supposed to, and it’s following the letter of the Minion Backup law. That law is “Once you’re at a level, you never go back up”. Let me show you what I mean.

Precedence in the Tuning Thresholds table

Take a look at this sample Minion.BackupTuningThresholds table.

[Screenshot: sample rows from the Minion.BackupTuningThresholds table]

Ok, in the above table we’ve got some tuning rows. This is a truncated version of the table, but it’s all we need to demonstrate precedence. We’ve got two rule sets here: one for MinionDefault (the row set that provides all the default configuration settings), and one for MinionDev (a specific database on my server).

  • MinionDefault is a global setting that says unless the DB has an override, it’ll take its rows from here.
  • MinionDev is the only DB on this server that has an override, so it’ll take its settings from the MinionDev rows.

At the most basic level, the precedence rule states that once there is an override row for a database, that database will never leave that level…it will never default back to the default row. So in this example, MinionDev is at the database level for its settings, so it will never go back up to the more generic MinionDefault row. Once you’re at a level, you stay at that level.

A “Zero Row” for every level

I’m going to explain how these rows work, and why they are the way they are. Notice that for both levels (that is, for the MinionDefault rows, and for the MinionDev rows), there is what we call a zero row. This is where the ThresholdValue = 0. The zero row is especially important for the MinionDefault row, because this is what covers all DBs; it’s quite possible that you could get a database that’s less than your lowest threshold value.

In the above table, the lowest (nonzero) threshold value for MinionDefault is 20GB. That means that, without the zero row, no DBs under 20GB would get any tuning values. Without any tuning values, the number of files would be NULL, and therefore you wouldn’t be able to back up anything…they wouldn’t have any files. So setting the zero row is essential.

And, since each DB stays at that level once it’s got an override, then whenever you put in a DB-level override it’s an excellent idea to give that DB a zero row as well. It may be 50GB now, but if you ever run an archive routine that drops it below your lowest threshold, then your backups will stop if you don’t have that zero row to catch it. Did I explain that well enough? Does it make sense?

That’s how the rule is applied at a high level between DBs. Let’s now look at how it’s applied within the DB itself.

“Zero Rows” within the database level

As I just stated above, you should really have a zero row for each database that has an override row (you know, where DBName = <yourDBname>).

Let’s look at MinionDev above. It has a BackupType=All set, and a BackupType=Full set. The All set takes care of all backup types that don’t have backup type overrides. So in this case, the All set takes care of Log and Diff backups, because there’s a specific override for Full. Get it? Good, let’s move on.

Notice that MinionDev has a zero row for the All set, and a zero row for the Full set. This is essential because following the rules of precedence, once it’s at the MinionDev/Full level, it doesn’t leave that level. So again, if there’s a chance that your database will fall below your lowest tuning threshold – in this case it’s 150GB – then the backup will fail, because there are no tuning parameters defined below 150GB. This again is why the zero row is so important: because it provides settings for all backups that fall below your lowest tuning setting.

And, if you were to put in a BackupType=Log override for MinionDev, it would also need to have a zero row. I could argue that it’s even more important there because it’s quite possible that your log could be below your tuning threshold.
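To make the precedence rule concrete, here’s a little standalone sketch you can run. To be clear, this is purely illustrative: the column names are simplified and this is not Minion Backup’s actual schema or code. It just shows the two-step logic: first lock in the most specific level that exists (DBName, then BackupType), and only then match the threshold within that level, which is exactly why the zero row has to exist at whatever level wins.

-- Purely illustrative: simplified columns, NOT Minion Backup's real schema or code.
DECLARE @DBName sysname = 'MinionDev';
DECLARE @BackupType VARCHAR(10) = 'Full';
DECLARE @DBSizeGB INT = 90; -- Below the 150GB threshold on purpose.

DECLARE @Thresholds TABLE (DBName sysname, BackupType VARCHAR(10), ThresholdGB INT, NumberOfFiles INT);
INSERT @Thresholds VALUES
	('MinionDefault', 'All',    0,  1),
	('MinionDefault', 'All',   20,  4),
	('MinionDev',     'All',    0,  2),
	('MinionDev',     'All',  150,  8),
	('MinionDev',     'Full',   0,  2),
	('MinionDev',     'Full', 150, 12);

-- Step 1: lock in the most specific level that exists; we never go back up from here.
DECLARE @Level sysname, @Type VARCHAR(10);
SET @Level = CASE WHEN EXISTS (SELECT 1 FROM @Thresholds WHERE DBName = @DBName)
	THEN @DBName ELSE 'MinionDefault' END;
SET @Type = CASE WHEN EXISTS (SELECT 1 FROM @Thresholds WHERE DBName = @Level AND BackupType = @BackupType)
	THEN @BackupType ELSE 'All' END;

-- Step 2: within that level ONLY, take the highest threshold the DB size clears.
-- Delete the MinionDev/Full zero row above and this returns nothing: no files, no backup.
SELECT TOP (1) NumberOfFiles
FROM @Thresholds
WHERE DBName = @Level AND BackupType = @Type AND ThresholdGB <= @DBSizeGB
ORDER BY ThresholdGB DESC;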

So now, our Argument

That’s how the precedence actually works in the Minion.BackupTuningThresholds table. The argument started when Jen thought that it should move back up to the All set if a specific BackupType override falls below its tuning threshold. In other words, in the above table, she wouldn’t require a zero row for the MinionDev-Full set. Instead, if the DB size fell below the 150GB threshold, she would move it back up to the MinionDev-All set, and take the lowest tuning threshold from there.

She said that it wasn’t in the spirit of the precedence rules to make the setting quite that pedantic. So after hours of arguing, drawing on the board, making our case, sketching out different scenarios, etc… we just kinda lost steam and moved on, because she had to get ready for her talk.

The point is though that this is the way it currently works: once it’s at its most specific level, it stays there. So, if you have tuning settings for specific backup types, you’d be really well served to have a zero row for each one just in case.

And I’ll also note that BackupType is the lowest granularity. So, Day and Time (another config option in this table) have nothing to do with this setting. You need to concentrate on the DBName and BackupType. Everything else will fall into place.

Final Caveat: We break the rule (a little)

Now, I know it sounds like a contradiction, but there is just one place where I break this rule. I call it the FailSafe. With the FailSafe, it’s possible to have specific overrides and still get your tuning thresholds from the MinionDefault zero row. Here’s why:

This is a rather nuanced config in Minion Backup, and it’s fairly easy to get something wrong and wind up without a backup. I didn’t want that to happen. So, if you do something like leave your zero row out for an override level, and your DB falls below your lowest threshold setting, you would wind up without any backup because there isn’t a number of files to pass to the statement generator.

FailSafe says: if you screw up and don’t have a tuning setting available, MB will grab settings from the MinionDefault zero row.

In this situation, I kick in the FailSafe mechanism, which pulls the tuning settings from the MinionDefault zero row. At least you’ll have a backup, even if it’s slow.

(That was one of Jen’s arguments: that a FailSafe is a great idea, but she wants it to come from the DB-All set instead of the MinionDefault-All set. I don’t know, maybe she’s right. Maybe that’s more intuitive. I’ll have to think about it. It wouldn’t be that big of a change really. I could walk up the chain. In the above table I could try the MinionDev-All zero row and if that doesn’t exist then I could use the MinionDefault-All zero row. What do you guys think?)

So why not just hardcode a single file into the routine so that when this happens you’re backing up to that single file? The answer is: flexibility. Your MinionDefault zero row may be set to 4 files because all your databases are kinda big and you don’t ever want to back up with fewer than that. So, set your MinionDefault zero row to something you want your smallest DB to use. If that’s a single file, then OK, but if it’s 4 or 6 files, then also OK. That’s why I didn’t hardcode a value into the FailSafe: it’s all about giving you the power to easily configure the routine to your environment.

Takeaways:

  1. The precedence rules are followed to the very letter of the law.
  2. Once a database is configured at a level, it stays there.
  3. The configuration level is specific to DBName, and then (at the next most specific level) to the DBName and BackupType.
  4. Whenever you have a database-level override row, always have a zero row for it.
  5. Whenever you have a BackupType-level override, always have a zero row for it.
  6. The FailSafe defaults back to the MinionDefault zero row if a level-appropriate setting isn’t available.

Ok, that’s it for this time. I hope this explanation helps you understand the reasoning behind what we did.


This stat is one of the coolest things you’ll see in Minion Reindex.
[Screenshot: Live Insight fragmentation stats progress in Minion Reindex]

Gathering fragmentation stats in large databases can take a long time and you have no insight into what’s going on.

With Live Insight, Minion Reindex allows you to see what’s going on every step of the way, including how many indexes you have left to gather stats on.

And we even give you the name of the table and index that’s currently having its stats gathered.  (Our reindexing operations aren’t black boxes. I use this almost every day and I honestly don’t know what we ever did without it.)

NOTE: This feature is turned on by default. In case you’ve accidentally turned it off, just make sure that for the current database you have LogProgress = 1 in the Minion.IndexSettingsDB table.
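For example, to turn it back on for a single database, something like this should do it (a minimal sketch; it assumes the table keys on DBName the same way the other Minion settings tables do):

-- Re-enable Live Insight logging for one database.
UPDATE Minion.IndexSettingsDB
SET LogProgress = 1
WHERE DBName = 'YourDatabase'; -- Or 'MinionDefault' for the default DB settings.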

Live Insight…by specific table

It gets even better. You may not be interested in Live Insight for every table.

In that case, you can turn off Live Insight for specific tables, or turn it off for an entire database and then turn it back on for only the tables you care about.

This is just one of the many useful features we have in our log.

And don’t forget: you can configure at the database level specifically or use the default DB settings by configuring the MinionDefault database in the Minion.IndexSettingsDB table.

I hope this helps you configure and use Minion Reindex better.

Order by Index Usage in Minion Reindex

Last week, a user asked if he could order index maintenance operations at the table level, based off of index usage. And of course, the answer to that is definitely yes. We designed Minion Reindex with the flexibility to address custom preferences like this without a lot of hassle, and with no extra jobs.

Minion Reindex already allows table ordering: you can give individual tables a heavier ReindexGroupOrder “weight” (higher numbers are reindexed first). All you have to do is change the order based off of your criteria. We’ve written a script –  MinionReindex-OrderTablesByUsage – that updates the ordering for all tables in a database, based off of usage. You’re welcome to customize this script and change the criteria to whatever you like.

A couple notes on the script:

  • Read the introductory comments for instructions and notes. And of course, review the script well.
  • This might have been easier with a MERGE statement, but we wanted it to be compatible with all versions that Minion supports. The script does an insert or an update based on whether that table is already listed in the “Minion.IndexSettingsTable” table.
  • This script can clearly be modified to meet any number of needs, like ordering tables by row count, or even excluding tables that don’t have enough reads.
  • To use this script, you can either add a job step, or encapsulate it in a stored procedure and add it as a DBPreCode in the Minion.SettingsDB table. It’s that easy.
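To give you a flavor of the approach, here’s a minimal sketch of the kind of usage query at the heart of a script like this (to be clear, this is not the actual MinionReindex-OrderTablesByUsage script). It ranks the current database’s tables by total reads from sys.dm_db_index_usage_stats; keep in mind that this DMV resets on every SQL Server restart, and tables that haven’t been touched since then won’t show up at all.

-- Rank tables by total reads since the last restart; busier tables get a higher
-- suggested weight, since higher ReindexGroupOrder values are reindexed first.
SELECT
	OBJECT_SCHEMA_NAME(us.object_id) AS SchemaName,
	OBJECT_NAME(us.object_id) AS TableName,
	SUM(us.user_seeks + us.user_scans + us.user_lookups) AS TotalReads,
	ROW_NUMBER() OVER (ORDER BY SUM(us.user_seeks + us.user_scans + us.user_lookups) ASC) AS SuggestedWeight
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
GROUP BY us.object_id
ORDER BY TotalReads DESC;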

Feel free to use this script or alter it as you need. And if you make it do something really cool, send it back to us and we’ll add it to the community of scripts on the site. We’ve done some initial testing of this script, but nowhere near the level of testing we put the product through, so if you find any bugs, let us know. Download: MinionReindex-OrderTablesByUsage

Download Minion Reindex at MidnightSQL.com/Minion