Tag Archives: SQL Server

Restrict SSRS Permissions

This blog is about the fallout from a really bad incident that happened with an SSRS server, and there's a very valuable lesson in there as well.  I was recently talking to a customer about a problem they had getting their SSRS site to load.  As it turns out, the problem was that someone had changed the account the service was running under, and that broke the encryption for the site.  Here's how it played out.  They needed to be able to access shares on remote servers, and the account SSRS was running under was a local account, so it couldn't be given permission to any of the shares.  That's why they changed the service account.  Now here's what happened.

Initially they were sunk, but SSRS was on a VM and they were lucky enough to have backups.  So they restored the backup to another VM and renamed the server so they could get it on the network.  Then they were able to back up the SSRS encryption key and restore it to the original server.  After that, everything worked right away.  So the first takeaway here is to always back up your SSRS encryption keys.  Always.  I've had this happen a few times where we didn't have backups of Windows we could restore, so we had to re-enter all the passwords for our data sources.  On one box it wasn't that big of a deal, and on another one it was quite a big pain.  So always back up your encryption keys.
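
By the way, if you'd rather script the key backup than click through Reporting Services Configuration Manager, the rskeymgmt utility that ships with SSRS will do it from the cmdline.  Here's a rough sketch (the path and password are just placeholders, and on a named instance you'd also pass the -i flag):

rskeymgmt -e -f D:\Backups\SSRSEncryptionKey-ServiceAcct-20170503.snk -p "SomeStrongPassword"

The -e flag extracts (backs up) the key to the file you name, and the same utility restores it with -a if you ever have to rebuild the box.  Either way, get that backup scheduled somewhere.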

That's not really the bad part though.  When I had them back up their current encryption key, I told them that I really like to keep a naming format for my encryption key backups.  That format is SSRSEncryptionKey-ServiceAcct-Date.snk.  And when they were typing their filename, the name they ended up with was this:
SSRSEncryptionKey-DomainAdmin-20170503.snk.

If you paid close attention, you’ll notice the ‘DomainAdmin’ portion of that name.  Yep, you got it right… they were running SSRS under the domain admin account.  The Windows guy thought that it would be too much trouble to manage the permissions and get everything right on all the shares and DBs that it needed to access.

So this is when I pretty much lost it.  These guys were running  SSRS under a domain admin account because they were too lazy to do the right thing.  It’s unthinkable.  There may be some reasonable excuses why you’re not able to change your current security model to something better.  You may even be able to convince me that you’re not just being lazy.  But to actively be lazy about your security isn’t something I’m going to take lying down.  Hey, I know it’s your shop, and I know you can ultimately do whatever you like, but I’m going to make sure you know what you’re doing.

So to those of you out there who are even considering being lazy about security, don't.  Do what you need to do.  You won't get it right all the time.  There's a lot of misinformation out there and there are a lot of pitfalls.  So not getting it completely right is ok.  You do what you can when you can.  But laziness will never be an excuse.

Cannot Generate SSPI Context

I woke up pretty early this morning and decided to get something in my dev environment worked out that I’ve been meaning to do for a long time now. I needed to put my main Minion dev box on a domain acct for SQL. It was currently running under local system. So I switched it to use the domain acct SQLSvc. The second I restarted SQL with the new acct I got the dreaded “Cannot Generate SSPI Context”.

Here’s my fix and I’m hoping it’ll help someone in the future.

First let me say that SSPI errors can be caused by a number of things. This one turned out to be an SPN error, but it doesn’t have to be. Among other things it could be:

  1. DNS lookup failures.
  2. SQL Server time out of sync.
  3. Client time out of sync.
  4. Anything else that keeps Kerberos from working.

 

Now there are also things that can keep the SPN from being created or used. The account might not have the perms it needs to create the SPN in AD, or there could be duplicate SPNs; you can only have one. Or you could simply have the wrong SPN. Make no mistake, this issue has caused many gray hairs for many DBAs, and I personally always sigh when it comes up, because it can be fixed in 5 minutes or it can take 4 days and 3 MS support techs getting involved. Most of the time, though, it's more straightforward than that. I'm not even going to attempt to make this a treatise on troubleshooting every aspect of this error. I'm just going to show you how to diagnose the SPN portion of it, and quite often that's the issue. If it doesn't work for you, it'll at least eliminate the SPN and you can concentrate your efforts on something else.
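
One thing worth mentioning up front: if you suspect the duplicate SPN scenario, SETSPN can look for duplicates itself.  On reasonably recent versions of Windows (older setspn builds don't have these flags, so consider this a sketch and check your version), you can run:

SETSPN -X
SETSPN -Q MSSQLSvc/MinionDevCon*

The -X flag reports duplicate SPNs across the domain, and -Q searches for a specific SPN so you can see which account it's registered to.  MinionDevCon is just my box, so substitute your own server name.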

OK, that’s enough prelim stuff, let’s get to it.

First, we need to see what SPNs we have on the box. There are 2 ways you can do that depending on what rights you have in the domain. I’m going to show you both of them so you can have a visual guide as well as a cmdline guide.

To see what SPNs you have on the box, go to the cmdline and type the following:

SETSPN -L MinionDevCon

What I’m saying here is to list (-L) the SPNs registered for the SQL box I’m interested in. In this case, MinionDevCon.

See below for the query and the results.

[Screenshot: SETSPN -L MinionDevCon output showing the two SPNs]

Also, notice that the SETSPN command isn’t case sensitive.

So above you can see that I’ve got 2 SPNs for my SQL acct on that box.  Now we just have to delete one of them.  However, before we fix this issue, I’ll show you another way to view the same info.

If you have access to AD, then you can also see the SPNs in the computer object properties.  So go to AD and navigate to the computer object for your SQL box.  Again, in my case this is MinionDevCon.  Then go into the properties of that computer object.  You’ll want to go to the Attribute Editor tab.

[Screenshot: the computer object's properties in AD, on the Attribute Editor tab]

Now find servicePrincipalName and either double-click it or highlight it and hit Edit.  This will show you the list of SPNs for that computer object.

Here you can see my 2 SPNs just like at the cmdline.

[Screenshot: the servicePrincipalName editor showing the same two SPNs]

Deleting the extra SPN:

Deleting the bad SPN is a fairly straightforward operation.  And again I'll show you 2 ways.

At the cmdline, simply use the SETSPN cmd again and pass in the delete flag.

SETSPN -D MSSQLSvc/MinionDevCon.Midnight.DBA:1433 MinionDevCon

So again, that's SETSPN -D <SPN> <ComputerName>
The SPN in the cmd above should be exactly the way it was represented in the list when it was queried.  Here’s the command in the query window so you can see what it looks like:

[Screenshot: the SETSPN -D command failing because my regular acct doesn't have rights]

Ahhh, see there, my regular acct doesn’t have rights to do this.  I wanted you to see what happens when you don’t have rights.  You will quite often need to get your AD guys to do this for you.  So if you just prepare the cmd and send it to them they’ll be able to make it happen.  Here’s what it looks like when I log in under my domain admin acct.

[Screenshot: the same SETSPN -D command succeeding under my domain admin acct]

And again, that cmd is simple.  Call SETSPN with the -D flag, which tells it to delete.  Then the exact name of the SPN followed by the name of the server.

Now let’s do the same thing in AD.  So you’re still in your computer object properties like above.  Simply highlight the SPN you want to drop and then click the Remove button.  It’ll put it in the text box above, but just click OK and it’ll be gone.

[Screenshot: removing the SPN from servicePrincipalName in the AD attribute editor]

Ok, that’s it.  I didn’t have to restart my SQL service, but I’ve heard of people who have.

Now, this was just a quick tutorial on how to manage SPNs.  This hole can go pretty deep.  Here’s a decent link on MSDN for troubleshooting SPNs.  I don’t think I like their troubleshooting because they don’t really do a good job of showing you the commands, but it’s a good explanation of the problem, what an SPN is, etc.  If I remember correctly it’ll also help you choose the right SPN.

Order by Index Usage in Minion Reindex

Last week, a user asked if he could order index maintenance operations at the table level, based off of index usage. And of course, the answer to that is definitely yes. We designed Minion Reindex to have flexibility to address custom preferences like this without a lot of hassle, and with no extra jobs.

Minion Reindex already allows table ordering: you can give individual tables a heavier ReindexGroupOrder “weight” (higher numbers are reindexed first). All you have to do is change the order based off of your criteria. We’ve written a script –  MinionReindex-OrderTablesByUsage – that updates the ordering for all tables in a database, based off of usage. You’re welcome to customize this script and change the criteria to whatever you like.

A couple notes on the script:

  • Read the introductory comments for instructions and notes. And of course, review the script well.
  • This might have been easier with a MERGE statement, but we wanted it to be compatible with all versions that Minion supports. The script does an insert or an update based on whether that table is already listed in the “Minion.IndexSettingsTable” table.
  • This script can clearly be modified to meet any number of needs, like ordering tables by row count, or even excluding tables that don’t have enough reads.
  • To use this script, you can either add a job step, or encapsulate it in a stored procedure and add it as a DBPreCode in the Minion.SettingsDB table. It’s that easy.
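
Just to give you a feel for the core idea, here's a stripped-down sketch of the ordering logic.  This is NOT the script itself, and the Minion.IndexSettingsTable column names other than ReindexGroupOrder (DBName, SchemaName, TableName) are from memory, so verify them against your install before you run anything.  It just ranks tables by how often their indexes are read and pushes that rank into ReindexGroupOrder (higher numbers are reindexed first):

;WITH TableUsage AS
(
    SELECT  s.name AS SchemaName,
            t.name AS TableName,
            SUM(ISNULL(u.user_seeks, 0) + ISNULL(u.user_scans, 0) + ISNULL(u.user_lookups, 0)) AS Reads
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    LEFT JOIN sys.dm_db_index_usage_stats u
           ON u.object_id = t.object_id
          AND u.database_id = DB_ID()
    GROUP BY s.name, t.name
)
UPDATE ist
SET ist.ReindexGroupOrder = tu.UsageRank   -- most-read tables get the biggest weight
FROM Minion.IndexSettingsTable ist
JOIN (
        SELECT SchemaName, TableName,
               RANK() OVER (ORDER BY Reads) AS UsageRank
        FROM TableUsage
     ) tu
  ON tu.SchemaName = ist.SchemaName        -- assumed column names; check your copy
 AND tu.TableName  = ist.TableName
WHERE ist.DBName = DB_NAME()               -- run this in the database you're ordering

The real script also inserts rows for tables that aren't listed in Minion.IndexSettingsTable yet, which is why it's longer than this.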

Feel free to use this script or alter it as you need. And if you make it do something really cool, send it back to us and we’ll add it to the community of scripts on the site. We’ve done some initial testing of this script, but nowhere near the level of testing we put the product through so if you find any bugs let us know. MinionReindex-OrderTablesByUsage

Download Minion Reindex at MidnightSQL.com/Minion

Killing SPIDs in Powershell

Today we’re going to continue our exploration of the server-level in PS.  You’ll remember before we’ve explored it a little bit when we talked about querying cluster nodes.

So today we’re going to kill users in a DB.  This is incredibly easy to do in PS.

First, let’s take a look at the T-SQL counterpart. Pretty much everyone has a script like this.

DECLARE @currSpid varchar(100),
        @SQL nvarchar(200)

-- Grab the spids for the app (or DB) you're interested in.
DECLARE Users CURSOR
FOR SELECT DISTINCT spid FROM sys.sysprocesses
    WHERE [program_name] LIKE '%businessobjects%'

OPEN Users

FETCH NEXT FROM Users INTO @currSpid
WHILE (@@fetch_status <> -1)
BEGIN

    SET @SQL = 'KILL ' + @currSpid
    EXEC (@SQL)
    --print @SQL

    FETCH NEXT FROM Users INTO @currSpid
END

CLOSE Users
DEALLOCATE Users

And of course, the problem is as it always is in T-SQL. It doesn't scale to several boxes, and if you find yourself w/o your script, it's not really easy to reproduce off the top of your head unless you're one of those freaks who can write cursors w/o looking at a map. And I don't know about you, but I quite often find myself w/o my scripts, so I like things that are easier to reproduce.

And like I said, PS scales very well to multiple boxes. Why would you want to run it against more than one box you ask?
Well, let’s say you’ve got a single app server that hits multiple DB servers and you want to kill them all for a maint window. You can’t assume that stopping the service on that app server will kill the DB connections. In fact, it quite often doesn’t. There are other testing scenarios where this could be useful, but I’ll let all of you use it as you see fit. The point is it’s here if you need it.

So to do this in powershell:
Start by connecting to a DB at the server level (either in sqlps or in PS proper… my examples will be in sqlps, but it really doesn’t matter as long as you connect).

In sqlps you can do this by doing a right-click on the servername and going to ‘Start Powershell’.

Then you just need to go up one level like this:

>cd ..

Now you’re ready for the command. 

So let’s say you want to drop all the spids connected to SeansDB.  And as usual, there are 2 ways to do this.  I’m going to show you both of them just for completeness.

Method 1:

dir | ?{$_.Name -eq "SeansDB"} | %{$_.KillAllProcesses("SeansDB")}

Now, those of you who know PS, know this is actually quite wasteful. What you’re doing is getting a list of all the DBs, then filtering it down to one and then running the method.
And of course since you call the method with the DB name to begin with this is actually useless. However, it’s still a legal method so I wanted to mention it.

Method 2: You've got to think about what you're doing here so you can make the right decision.  When you do this with sys.sysprocesses, you're working at the server level, so the only place individual DBs come into play is in the result set of whatever cursor you write to limit your results. And quite often killing all the spids in a DB can be very useful. So here's a better way to go about it in PS.

(dir).KillAllProcesses("SeansDB")

This is easy to remember, and easy to type. And of course, it’s obvious what it does… it kills all the spids connected to SeansDB. Now, you can also kill just a specific spid like this:

(dir).KillProcess(69)

And that’s how PS can bring that long t-sql cursor down to almost nothing. What? What’s that? You want more? You want to be able to kill spids based off of application, or CPU, or some other criteria? Well, now you’re just being demanding, but I think PS can do something for you. This is just going to take an extra step and here’s what it’ll look like.
In our example here let’s kill spids by application. So we’ll kill all users who are connecting through SSMS.

$a = (dir).EnumProcesses()
$a | ?{$_.Program -match "SQL Server Management Studio"} | %{$_.KillProcess($_.SPID)}

Now, there’s a treasure trove of stuff in here. Let’s take a look at this in detail, especially the EnumProcesses() method.
This method is overloaded so you’ve got some nice functionality built-in.
Here are the documented overloads:

$a = (dir).EnumProcesses() ## Gets all processes.
$a = (dir).EnumProcesses($False) ## Excludes system processes.
$a = (dir).EnumProcesses(69) ## Get info on a single spid... 69 in this case.
$a = (dir).EnumProcesses("domain\user") ## Gets processes run by a specified login.

And now that you’ve got your list of processes, you can do a get-member on them to see what properties are available to you. So remember above when we killed spids by Program? Well, you can kill by anything returned as a property from your get-member. Here’s a screenshot of the results I got from mine.
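
If you don't have the screenshot handy, it's just a plain old get-member call.  Something like this (the exact property list depends on your SMO version, so take the names below as a sketch):

$a = (dir).EnumProcesses()
$a | Get-Member

Whatever shows up in that list as a property (Spid and Program for sure, and if memory serves things like Host, Database, and Cpu) is fair game for your filter.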

Killing processes in powershell is so easy it almost makes me feel stupid for documenting it. And while you may be used to doing things like this in T-SQL, give this a try and I think you'll find you like it much better once you get used to it.
And I mentioned that it scales to multiple boxes really well, and it does. I just didn’t show you that here cause it’s pretty easy to figure out.
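
But just so you have something to start from, here's a rough sketch of the multi-box version from plain powershell using SMO.  The server names are obviously placeholders, and you'll need the SMO assemblies loaded, which sqlps does for you automatically:

# Load SMO if you're not already in sqlps.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

$servers = "Server1", "Server2", "Server3"   # placeholder server names
foreach ($s in $servers)
{
    $srv = New-Object Microsoft.SqlServer.Management.Smo.Server($s)
    $srv.KillAllProcesses("SeansDB")          # same method as above, once per server
}

That's really all there is to it.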

And DO use this with care. It’s so much easier to kill everything on the box in PS than it is with T-SQL. And I’m not taking any responsibility for how you (mis)use this knowledge.

Head in the Clouds

OK, so a while back I wrote a post about how Oracle doesn't build as much of a community as MS.  And not only do I stand by it, but I've seen quite a few of the replies around the internet and those Oracle guys amaze me even to this day.

They spend an awful lot of time talking about how much better Oracle is than mssql and how much more stable it is and how much more Oracle users expect from their DBs because they tend to be more important than mssql DBs.  Also, Oracle DBs have more users going against them than mssql DBs so more people are affected when they do go down so Oracle DBAs have to be more on the ball because their users expect more uptime.  Whereas mssql DBAs’ users expect more downtime so the DBAs don’t have to hurry as much to get the system back up because that downtime is expected. 

Man, talk about having your head in the clouds.  I can't believe that in this day and age people are still so incredibly blind.  Do they really think that mssql has taken the market by storm because there are so many people with little insignificant DBs who just don't wanna pay for Oracle on these tiny little things?  It's not under dispute that Oracle outshines mssql in some areas.  They've been around longer and they've had more time to bake their product.  But that doesn't make mssql a slouch either.  I know you guys know this all too well.  Some of the biggest and most important DBs on the planet are on mssql and they require just as much uptime as those super-important Oracle DBs.

To make such statements is not only ludicrous, it’s just childish.  It’s like saying that linux apps are more important than windows apps.  Grow up guys.


The Clean House

This post was inspired while I was cleaning my house today.  It's easy: if you think you've got a clean house, just stand up on a ladder some time and change your perspective.  I think you'll find there are lots of things about your clean house you don't know.  And while nobody's officially complaining about the condition of the house (and in fact, everyone thinks it's in great shape), it clearly is in disarray when you shift to that new bird's eye view.

The same is true for your DB.  I honestly can't count the number of times I've been told that there was no reason to look at DB performance because nobody was complaining about it.  Then when I dig into it and look at things from a DBA angle, the system is actually abysmal.  So if performance is so bad, then why isn't anyone complaining?  Well, there can be a couple of reasons for that.  First of all, the DB may just not be very busy, comparatively, so the users won't notice the issues.  And second, they may have just gotten used to things running slower, so again they don't notice it.  I've actually seen that several times.  Things get slower and slower, or better yet, they just start out slow and nobody ever knows the difference.  And what's even WORSE is that more often than not there's some cornhole in the background telling them that Oracle would solve all their problems.  Listen, if you can't write a simple process against mssql and make it run right, do you really think you'll be able to hit Oracle ok?

So I’ve gotten off topic a little… back on track.  The point of all this is that just because nobody’s complaining about performance that doesn’t mean that your system is performing.  I quite often find that companies have no idea what their systems can scale to and they just assume that they’ll be able to handle whatever they throw at them.  And it typically takes more forethought than most companies have to look at a system objectively and proactively fix problems.

So the next time you’re considering your system performance, stand on a ladder or crawl down on the floor and see how things look from there.


Fairy Repellent

One of my big peeves is devs writing code to protect themselves against things that never happen.

You really see it all the time don’t you?  And it always sounds so reasonable at the time, but it really isn’t if you put any thought into it at all.  Here are a couple examples.

We had a dev a while back who introduced a redesign for a job where he put pretty much everything into the same step.  He had a long SP written that combined xp_cmdshell steps and reindexing steps, etc.  It was all just one long strip of code.  And of course his reasoning sounded logical on the surface, but not once I started asking the right kinds of questions.  See, his reasoning was that since there's no way in SQL to run a job step by itself and not the steps that come after it, he needed the ability to run any section of that SP he needed.  Ok, that sounds alright because he's right.  While you can start a job at any step, you can't tell it to only run a single step unless you play with the workflow in the steps.

But one simple question killed the whole thing… when’s the last time you needed to run just a single step in this job?  I assume that the steps that follow are there for a reason, right?  So give me a scenario that this would cover.  And of course it’s always the ethereal… I can’t think of anything right now, but I’m trying to cover all the contingencies.  I get that dude, but you’re coding for issues you can’t even state. 

What about when the job fails?  How often does the job fail? 

He said, oh it probably fails a couple times a week for one reason or another. 

Ok, now how easy is it going to be to troubleshoot the failure if everything's just in one huge pile?  Shouldn't you be coding for the case that happens the most?  And of those times it fails, say at step 2, do you ever need to run step 2 and nothing after it?

He said, No. 

So then why are you coding for it?

He says, well there might be a time when I need to run a single piece of the code for some reason say redoing something in the middle of the day or something.  It hasn’t happened yet, but it could. 

Yeah sure, that could happen.  And if it ever does what’s stopping you from copying the code from the job step you need to run and just running it manually in SSMS? 

To which he replied… Ummmm….

So ok, this is getting long so I’ll leave this at one example, but you get the point.  It’s a lot like buying fairy repellent for your house, and setting fairy traps everywhere.  When’s the last time you had a fairy problem?


No More Select *

Ok, not ‘no more’, but you guys should seriously limit your usage of select * for everything.  I realize that it’s easier to support when you change the data requirement, but it pushes a lot more data than you need and that could really impact network and server performance. 

Let's say you're running a web app and you need 3 cols on your page.  And you pull all 37 because it's just easier to type a * than each column.  That's fine from your end, but you could seriously impact the server and the network, because if one of those cols is really wide, say varchar(200) (or even 400, right…), then you're taking up that much extra bandwidth and server memory.  Sure, it probably won't affect your session that much now, but when you've got 500 people on your site at the same time you'll start to feel the pain for sure.

So just code for performance and stop being so lazy about having to type a few chars.  And if you really don’t like typing that much then get yourself a nice code completer like the one from Red-Gate and you won’t have to type nearly as much.  But I’m getting sick of laziness being an excuse for bad coding.

There are some exceptions though.  It is ok to use select * for some things but you have to choose those individually and judiciously.  Let’s say that you’ve got an SP that pulls 65 cols from a complicated set of logic that you don’t wanna have to re-create or maintain separately.  And let’s also say that you only need to query it every now and then, or maybe just 2-3 times a day.  In a case like that, it’s probably ok to go ahead and use the SP even though you’re only using a handful of the resultset.  However, in that same scenario, if you were using that data several times a minute, or even a second, then you’re really better off from a performance perspective to go ahead and create your own SP that returns less data.

Another excuse I often hear is that people used select * to make it easier to make changes to the app.  And that is logical to a degree.  But people protect themselves all the time from issues that aren't issues.  For instance, I had this just a while back where someone gave me that excuse, and when I probed, the app had been up for 2yrs and had only ever had one minor change.  So what are you protecting yourself from then?  If the app is fairly static, then grow up and do the right thing.

OK, that’s actually stepping on the toes of another post so I’ll stop here.


Will the Real Idiot Stand-up.

As you know, our other DBA just left and he just started his new job. I was IMing him today and asked him how it was going. He said the last DBA was an idiot. It's funny, cause I can't count the number of times I've said that. I don't think I've ever started a job where the guy before me knew what he was doing.

The question is though, am I really all that good or do I just have an inflated ego? I’d probably have to say it’s a bit of both really. I’ve seen a lot of DBAs who just don’t know the simplest things about SQL. I’ve talked about this several times in both my blogs so I’m not going to harp on it too much right now, but it holds more true every year I’m in this industry.

There's a difference between just doing things differently than the other guy, and his systems actually being neglected. Not performing backups or index maint. is bad DBAing. It's not just a different way of doing things. I remember talking to a guy last year who was a very high-level DBA at a company we all know. I was at PASS, come to think of it. And he sat there proudly and told me that they NEVER change their sa passwords on any of their systems. I would love to tell you his reasoning, but I just couldn't get into something like that. But to be proud that you never change your sa password is just asinine. You know what they say, dude: if you look around the room and you can't find the asshole, it's you. The same goes for idiots.

It's hard to measure skill though. Everyone has such different experiences. Things I've come to know well may be completely foreign to a different DBA who's far better than I am at something else. So does it make him an idiot because he doesn't know what I know? Yeah, sometimes. The basics should be covered. Every DBA should know how to back up a system, do maint., and handle basic security. And so often it's these basics that aren't covered.

So now it comes down to simplicity. What makes a really good DBA? I’ve had several talks with guys all over IT about this same topic, and in almost every session, we’ve pretty much concluded that you only have to try a little bit to be better than the average guy. The average guy does very very little to further his knowledge or to get really good at his job. Most people just skate by. So if you try just a little bit you can rise above the crowd. That’s what I think anyway.

SQL Server Done Right

This is the perfect topic to go along with what I wrote on my other blog today in The real difference between SQL Server and Oracle.

I just got an email from the producer of the new Kalen Delaney series on SQL Server giving me my press pass into the online content for this series. I’ve only watched the 1st 9mins so far and already it’s exactly what I’m talking about in my other blog. Here’s Kalen Delaney who writes one of the most successful series on SQL Server (the other one is by the late Ken Henderson. I still have a hard time saying that), and she’s going the extra mile to put her book into a video training series where she explains the concepts herself.
I, like many other people, learn better when things are explained to me than I do from a lifeless page. And Kalen's an experienced teacher, so she has a way of explaining things that makes you just get it. In this video she's already covered security of metadata and the sys schema. She's actually explaining how this stuff fits together from the ground up. That's how it's done. I have no doubt that the rest of the series will contain the same deep-level understanding.

I think I'm going to enjoy this series and I'll try to write up a full report when I'm done. Or maybe I'll just do it as I go along.

OK, so here’s the link to the site. You can order the DVD or you can watch it online. It’s good stuff. Seriously, go check it out if you haven’t.
I was recently chatting with Kalen in email and she told me that this is basically the course she teaches when she’s brought into a company to teach a class.

Actually, I didn’t mean this to be an official interview, but I’m going to go ahead and paste her email here. I’m sure she won’t mind (at least I hope not) and she explains it better than I would anyway. I typically don’t post emails without asking first, but she knows who I am and she answered my questions like she was being interviewed, so this one time I’m going to do it. But you’ll almost never see me take this liberty.

1. What material will this first DVD cover…

You can get information about my course here: http://www.insidesqlserver.com/Course%20Description%20and%20Outline.htm
The first DVD covers most of what is in Module 1.

2. What format will it take… will it be a group of slides and whitepapers, or screencast instruction by you…

The DVD will be a mixture of live capture of me talking, and screen captures of my slides and my demos.

3. Who all is involved in the project…

I am recording the class that I have been presenting all over the world for the last several years. Chuck Boyce, of AskaSQLGuru.com, is doing the filming and editing. The business side is being managed by Peter Ward of www.WardyIT.com in Brisbane, Australia.

4. How often can we expect to see a new DVD come out…

Since I have to fly to New York for filming, we are only able to do about one a month. In fact, I am just about to leave for the airport for the second round of filming.

5. What advantage will one have in ordering these over just getting the books…

Different people learn in different ways. If you like to hear and see someone explaining concepts, this can add to the benefit of the books. People pay a lot of money to attend my classes, but since I'm only one person, I can't offer them that often. The DVDs are a chance for anyone, anywhere, to get to take my class. If you can read and absorb everything in the books on your own, the DVDs might not offer anything more.

So again, here’s the link to SQLServerDVD.com.