XP_CmdShell isn’t Evil

I’ve been hearing it more and more the past year.
“XP_cmdshell should always be turned off.”
“Whatever you do, don’t turn on XP_cmdshell!”
“We can’t do that, it requires XP_cmdshell!”
“You’ll fail your audit if XP_cmdshell is turned on.”
And all the other variations.

And I suppose I’ve been hearing it more and more lately because Minion Reindex requires it and Minion Backup will require it even more so.

However, I’ll tell you I’m getting pretty tired of hearing it, so, true to my blog, I’m going to rant.
XP_cmdshell has been around forever. And way back in the day, like 15-20 years ago, it was installed wide open to the public. This is where the problem started. This was back in the day when SQL’s GUI allowed way too many people who had no idea what they were doing to create and manage DBs. That ease of use was a huge part of SQL Server taking hold in the industry. However, with the product being that easy to use, a lot of these untrained DBAs had no idea XP_cmdshell was even there, so their instance was completely vulnerable and they didn’t even know it. Honestly, this was Microsoft’s fault. They should never have packaged up something that dangerous completely open to the public. But you know what, back then they were also installing sa with a NULL password by default too. And Oracle had their scott\tiger username\password combo, so MS wasn’t the only one doing dumb security back then.

However, now XP_cmdshell comes turned off and when you enable it, it’s not open to public anymore. So seriously, what are you still afraid of? I understand that you used to be scared of it because there was no way to lock it down back then. In fact, Microsoft didn’t provide a way to lock down XP_cmdshell until somewhere in the neighborhood of version 4.2. So back when it was open to public I can see how writing a DENY statement would be really taxing to you as a DBA.
But these days you don’t have any excuses. You have to go out of your way to open it up to public. XP_cmdshell is still really useful and I’m personally able to create many excellent solutions using it… things that would be much more difficult otherwise. And do you know what I tell people who tell me how dangerous it is? I ask them why they don’t lock it down.
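
And just so we’re on the same page, here’s a minimal sketch of what locking it down looks like. The role and user names are made up for the example, so substitute your own:
-- Turn xp_cmdshell on (it ships disabled), then control who gets to execute it.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- xp_cmdshell lives in master, so the EXECUTE permission is managed there.
USE master;
CREATE ROLE CmdShellUsers;                        -- example role name
GRANT EXECUTE ON xp_cmdshell TO CmdShellUsers;
ALTER ROLE CmdShellUsers ADD MEMBER SomeAppUser;  -- example user in master; use sp_addrolemember on pre-2012 instances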

Think about it… there are many dangerous features in SQL. And they’re all kept in check by controlling permissions to them. You don’t see anyone screaming that those other features should never be allowed on the box, because people just say, “We use it, but we keep its usage controlled pretty tightly.” So why doesn’t that apply to XP_cmdshell? Do you think that SQL all of a sudden forgets how to deny execute perms when that gets called? Do you think that SQL honors all security except that one? Do you think XP_cmdshell is powerful enough to override SQL security and just do what it wants anyway?
Of course not. So what are you afraid of?

The truth is that XP_cmdshell can do a lot and in the wrong hands it can make a royal mess of things. Then again so can DELETE and UPDATE. So can SHUTDOWN. So can CLR. So can DROP DATABASE. So can Dynamic SQL. And you don’t see anyone saying that all of those should never be allowed on any server for any reason. And I would honestly venture to say that Dynamic SQL has been the cause of far more security breaches than XP_cmdshell ever has. I don’t have any numbers to back me up, but I bet if you look at the number of security issues caused by XP_cmdshell, they’re far outweighed by those caused by other features.

And it’s not like people have no way to get that functionality just because XP_cmdshell is disabled. There are still cmdline job steps and cmdline SSIS tasks. And of course, you’ve got CLR. All of which can be just as dangerous as XP_cmdshell yet they run on systems all the time. And I know what you’re thinking… “But Sean, we control those through permissions so they can’t do anything really bad.” Yeah, so you’re making my point for me. But do you think that if an SSIS guy wanted to do something bad to your box, he couldn’t find a way if he weren’t locked down? Of course he could.

The cool thing about the cmdline task in Agent jobs is that they can be run via proxy. You can set up a proxy user to run that step under so that its Windows perms are limited and it can’t go haywire. You wanna hear a secret? There’s a built-in proxy mechanism for XP_cmdshell too. I could tell you how to do it, but DatabaseJournal has already done such a fine job. So here’s the link to setting up the cmdshell credential.
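
And in case that link ever dies, the short version is a single call (the account and password here are obviously placeholders):
-- Creates the ##xp_cmdshell_proxy_account## credential that non-sysadmin callers of xp_cmdshell run under.
EXEC sp_xp_cmdshell_proxy_account 'MYDOMAIN\CmdShellProxy', 'StrongPasswordHere';
-- And to drop the proxy credential later:
-- EXEC sp_xp_cmdshell_proxy_account NULL;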

I don’t want you to just turn on XP_cmdshell on all of your systems for no reason. But I don’t want you to completely rule it out as a solution just because you’re afraid of it. Tell your Windows admins who are afraid of it to mind their own business and stick to what they know. You’re a DBA and it’s time for you to take back your SQL instances. Lock them down. Don’t be afraid to use cool functionality because so many people refused to read the documentation 20 years ago. You know better now. So go out there and do the right thing. Lock down XP_cmdshell, but use it.

Replication Error: The ALTER TABLE statement conflicted with the FOREIGN KEY constraint.

I was initializing a transactional replication with a snapshot today and I got the following error:

The ALTER TABLE statement conflicted with the FOREIGN KEY constraint “FK_Table_xxxx”. The conflict occurred in database “DestDB”, table “dbo.MyTable”, column ‘MyColId’. (Source: MSSQLServer, Error number: 547)

This stopped the snapshot from being applied.
I searched for that FK in the DB and it didn’t exist… not on that table, not anywhere.

I’m going to keep this short. The answer is that SQL keeps a copy of FKs in dbo.MSsavedForeignKeys.
So if you get this error, you can go to the subscriber DB and delete the row from that table and you should be fine.
I had the problem where it kept finding new FKs to error on.
I finally just deleted all the rows out of that table and everything was fine after that.
The actual delete stmt I ran was:
delete dbo.MSsavedForeignKeys where constraint_name = N'FKName'
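
And if you want to look before you leap, or clear the whole thing out like I ended up doing, something like this works. DestDB is the subscriber DB from my error message, and the constraint name is just the one from my scenario:
USE DestDB;
-- See which foreign keys replication has saved off:
SELECT * FROM dbo.MSsavedForeignKeys;
-- Delete just the offending constraint, or clear the whole table as a last resort:
DELETE dbo.MSsavedForeignKeys WHERE constraint_name = N'FK_Table_xxxx';
-- DELETE dbo.MSsavedForeignKeys;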

I hope this saves you a lot of looking around.

Minion Reindex 1.1 Release (and re-release)

We released Minion Reindex version 1.1 last week. The changes are fairly small:

  • Made sure Minion Reindex handles all nonstandard naming (e.g., object names with spaces or special characters).
  • Added support for Availability Group replicas. (Basic AG support has been added by only permitting Minion Reindex to run on an AG Primary DB.)
  • Fixed formatting in Minion.Help stored procedure.
  • If you run the installation script in a database other than master, this will now automatically be reflected in the Minion Reindex jobs. (The documents still say that you have to change this manually…I need to update that…)

But, and this is important, if you downloaded Minion Reindex 1.1 before this blog post hit, you need to re-download it. Yep, Jen messed up. She posted an earlier, incomplete version of the 1.1 package, and so introduced a syntax error.  As of now, this is fixed. (Her delicious dish of crow is in the oven, almost ready to eat.)

We timed the release to coincide with Grant Fritchey’s review of Minion Reindex, which has sparked a fantastic discussion in the comments.

So, once you download the fresh, non-syntaxy version of Minion Reindex 1.1, you can chime in and let us know how you feel!

 

Video: A Better Way to Reindex

Edit: Corrected the recording link.

Yesterday I taught “A Better Way to Reindex” for the PASS Performance Virtual Chapter (event link, with recording).

The recording is up, if you missed the live event, or if you want to re-watch it:  https://attendee.gotowebinar.com/recording/8483181173057914370

And by the way, the demo that fails (there’s ALWAYS one demo that fails) failed because the fragmentation routine we use fragmented the data way more than usual. All we had to do was increase the reorg threshold, and it would’ve been fine. So for the record, it’s the fault of the test harness, not of Minion Reindex.

Here’s the abstract:

A BETTER WAY TO REINDEX

Let’s play a guessing game: I guess that you don’t want to spend time on index maintenance, but you know your servers need it. You want something that’s straight plug and play, dependable, and preferably free. I also guess that you don’t want to manage multiple jobs for this one maintenance task, even when you have exceptions and special settings. Am I close on this? Come and take a look at Minion Reindex. It does all those things that you want for index maintenance, plus some really innovative stuff. Watch your reindex progress live. Configure settings and exceptions for individual databases or tables. Gather fragmentation stats separately, outside your maintenance window. There’s quite a lot more so come see how you’ll make your index maintenance a lot easier.

Of course, you should go and download Minion Reindex yourself, and see the documentation, and follow along with all the fun stuff I do in the session.

Allow_Page_Locks for Reorgs

There are many settings that get set one way or another in DBs and in tables. Allow_Page_Locks is one that you may not be able to do anything about, because your vendor may require it and your situation may require it as well. Normally it’s set to true, but it does get set to false, and when it does, it typically needs to stay that way.
The problem is that a reorg will fail on indexes that have allow_page_locks = false. But if you change the setting, you may see increased blocking issues. So what is there to do?

Well, the answer is Minion Reindex. We allow you to define pre and post code at the table-level that you can use to switch this option on and then off again when the table is through. And better yet, we even give you the code to discover all of these issues in your database and fix them. In the Minion Reindex download folder you’ll find a Queries folder. This folder has a sql file that you can run and it will insert the table-level exceptions with the proper pre/post code. The precode sets allow_page_locks = true and the postcode sets allow_page_locks = false. This way you can still have the setting the way you need it, and perform your index maintenance too.
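
If you just want to see the raw pieces involved, here’s a rough sketch. This is not the actual script from the Queries folder, and the table and index names in the pre/post code are placeholders:
-- Find the indexes that currently disallow page locks:
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
       OBJECT_NAME(i.object_id) AS TableName,
       i.name AS IndexName
FROM sys.indexes i
JOIN sys.objects o ON o.object_id = i.object_id
WHERE i.allow_page_locks = 0
  AND i.index_id > 0
  AND o.is_ms_shipped = 0;

-- Example precode: allow page locks so the reorg can run.
ALTER INDEX IX_MyIndex ON dbo.MyTable SET (ALLOW_PAGE_LOCKS = ON);
-- ...the reorg runs here...
-- Example postcode: put the setting back the way the vendor needs it.
ALTER INDEX IX_MyIndex ON dbo.MyTable SET (ALLOW_PAGE_LOCKS = OFF);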

And of course, Minion Reindex is completely free so download it now and you won’t be sorry.

129 of 635

This stat is one of the coolest things you’ll see in Minion Reindex.
(Screenshot: FragStats)

Gathering fragmentation stats in large DBs can take a long time and you have no insight into what’s going on. With Live Insight, Minion Reindex allows you to see what’s going on every step of the way, including how many indexes you have left to gather stats on. And we even give you the name of the table and index that’s currently having its stats gathered. Our reindexing ops aren’t black boxes. I use this almost every day and I honestly don’t know what we ever did without it. This feature is turned on by default, but in case you’ve accidentally turned it off just make sure that for the current DB you have LogProgress = 1 in the Minion.SettingsDB table.
However, it’s better than that. You may not be interested in Live Insight for every table. In this case you can turn it off for certain tables, or turn it off for an entire DB and then turn it on only for certain tables. This is just one of the many useful features we have in our log.
And don’t forget that you can configure at the DB level specifically or use the default DB settings by configuring the MinionDefault DB in the SettingsDB table.
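
For reference, turning Live Insight back on is just an update to that settings table. LogProgress and the MinionDefault row are exactly as described above, but the DBName column is my guess at the layout, so check your Minion.SettingsDB before running anything like this:
UPDATE Minion.SettingsDB
SET    LogProgress = 1
WHERE  DBName = N'MinionDefault';  -- or a specific database name; DBName is an assumed column name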

I hope this helps you configure and use Minion Reindex better.

Order by Index Usage in Minion Reindex

Last week, a user asked if he could order index maintenance operations at the table level, based off of index usage. And of course, the answer to that is definitely yes. We designed Minion Reindex to have flexibility to address custom preferences like this without a lot of hassle, and with no extra jobs.

Minion Reindex already allows table ordering: you can give individual tables a heavier ReindexGroupOrder “weight” (higher numbers are reindexed first). All you have to do is change the order based off of your criteria. We’ve written a script – MinionReindex-OrderTablesByUsage – that updates the ordering for all tables in a database, based off of usage. You’re welcome to customize this script and change the criteria to whatever you like.

A couple notes on the script:

  • Read the introductory comments for instructions and notes. And of course, review the script well.
  • This might have been easier with a MERGE statement, but we wanted it to be compatible with all versions that Minion supports. The script does an insert or an update based on whether that table is already listed in the “Minion.IndexSettingsTable” table (there’s a rough sketch of that approach after these notes).
  • This script can clearly be modified to meet any number of needs, like ordering tables by row count, or even excluding tables that don’t have enough reads.
  • To use this script, you can either add a job step, or encapsulate it in a stored procedure and add it as a DBPreCode in the Minion.SettingsDB table. It’s that easy.
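
To give you a feel for it, here’s a very rough sketch of that insert-or-update idea. It is not the actual MinionReindex-OrderTablesByUsage script, and the Minion.IndexSettingsTable column names other than ReindexGroupOrder are guesses, so lean on the real script:
-- Weight each table that's already in the settings table by how often its indexes are read.
;WITH TableUsage AS
(
    SELECT us.object_id,
           SUM(us.user_seeks + us.user_scans + us.user_lookups) AS Reads
    FROM sys.dm_db_index_usage_stats us
    WHERE us.database_id = DB_ID()
    GROUP BY us.object_id
)
UPDATE s
SET    s.ReindexGroupOrder = u.Reads  -- heavier-used tables get reindexed first
FROM   Minion.IndexSettingsTable s
JOIN   TableUsage u ON u.object_id = OBJECT_ID(s.TableName);  -- TableName is an assumed column
-- Tables not already in the settings table would get an INSERT instead, per the notes above.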

Feel free to use this script or alter it as you need. And if you make it do something really cool, send it back to us and we’ll add it to the community of scripts on the site. We’ve done some initial testing of this script, but nowhere near the level of testing we put the product through, so if you find any bugs, let us know. MinionReindex-OrderTablesByUsage

Download Minion Reindex at MidnightSQL.com/Minion

Minion Reindex arrives this Friday!

The MidnightDBA team is announcing the release of a new, free index maintenance solution for SQL Server: Minion Reindex by MidnightDBA.  We designed Minion Reindex to solve all of the reindexing headaches we’ve experienced at dozens and dozens of client sites.

The Bottom Line

Your databases need index maintenance for optimal performance, and most shops don’t have a decent solution in place.  Minion Reindex is extremely simple to implement, and deeply configurable. Junior DBAs will love the ease of use, and expert data professionals will love the extensive settings and features available.

Read more on the Minion Reindex page:

  • Check out the full features list.
  • Get the download link there starting on Friday, October 24, 2014.
  • Join us for the Minion Reindex webinar Monday, October 27.
  • See documentation and tutorials as they come in!

You can ask questions and follow along with the discussion on Twitter using the #MinionReindex hashtag.

Stay tuned for the best thing index maintenance has ever seen…

 

Priority vs. Weight

Ok, so this is actually Sean writing this time.
I thought I’d hit the ground running with a great topic that came up just this morning.

The issue is how do you prioritize tasks in code? Let’s say you’re going to process a list of DBs, but you also want to do them in a specific order. You’ve really got 2 choices when deciding how to do this: Priority and Weight.

Priority:
This one is probably the most common. Say you have a list of objects and you rank them from 1-10. In this case 1 is the top priority and 10 is the least. That’s how a priority system works: items move up the list toward the top position.

Weight:
A weight system works just the opposite way. You give more weight to items by giving them a higher number. This extra weight makes them more important so they get processed first. So in the 1-10 list, 10 would be processed first because it has more weight.

So what’s the difference between these? Well, I think a priority system is easier when the list is fairly static. You’re not going to be making any real changes to the priorities or the objects very often.

A weight system is better when your list and priorities are more dynamic.

Let’s take a look at an example:
You want to process some DBs for something. You pull in the list of DBs and you want them in a specific order. So you put them in #DBs with an Ordering col.
If there won’t be any new DBs in the list for a long time, and you’re not likely to change the order, then a priority system is good for you. However, if you’re going to be adding more DBs, and those DBs will possibly even be higher priority than the current ones, then a weight system is best. The reason is that if you’ve already got the numbers 1-10 populated, and something new pops up in the list that’s higher priority than anything else, you’d have to set that to 1 and then shift all the others down by 1. So what used to be 1 is now 2, what used to be 2 is now 3, etc. However, with a weight system, you can add as many new items as you like, and you just keep increasing the number. So in this case the new item would be assigned 11 because it’s more important than anything else and has more weight.
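
Here’s a tiny sketch of the weight version using the #DBs/Ordering idea from above; the DB names are made up:
CREATE TABLE #DBs (DBName sysname, Ordering int);

INSERT #DBs (DBName, Ordering)
VALUES (N'Sales', 10), (N'HR', 5), (N'Archive', 1);

-- A new, more important DB just gets a bigger weight; nothing else has to shift.
INSERT #DBs (DBName, Ordering) VALUES (N'Billing', 11);

-- Process the heaviest weight first.
SELECT DBName FROM #DBs ORDER BY Ordering DESC;

DROP TABLE #DBs;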

Of course, if you have to rearrange the weights of any of the lower ones, it does become a bit more tedious. In that case you may have to make another sub-group column. This way you can keep your major groups and then just change priorities or weights within the sub-group. So let’s say you’ve got DBs grouped from 1-100 using either method, but we’ll say it’s by weight for now. If you have to raise one DB’s weight by one, you’d have to shift everything else up by one.
However, if you have that DB in a major group of say 10 DBs, and they all have weights within that group, then you only have to change the weights within that one group. So whereas before you might have had to change the weights of 30 DBs, now you only have to change fewer than 10.

I tend to use a weighted system more, but I have nothing against a priority system. And I prefer the 2 group system even more.

Well Wasn’t that FUN!!!

Imagine my surprise this evening when Jen got a text from Denny Cherry asking if we were ok after my latest blog post here. I said, what blog post?
Apparently my account had been hacked and someone thought they were being funny. However, looking at the wording in that post, it sounds like a bot to me.

So anyway, I’m fine, nothing has happened, except I about broke a rib running over here to change my password.

Instead of working, I blog.