Decree: Set your partition offset and block size – make SQL Server faster

If I’ve said it once, I’ve said it 1,024 times: your SQL data and log drives need a 1024 KB starting offset, and a 64 KB block size*.

Your SQL data and log drives need a 1024 KB starting offset, and a 64 KB block size.

If you read this blog, you’re likely a data professional. And if you’re not sure what I’m talking about, here are the bare-bones basics you need to know:

  1. The disk system under SQL Server is not set up optimally by default.
  2. It’s super easy to set it up optimally, if you take care of it at the beginning. That is, when the drive is first created, before you’ve got data sitting on it.
  3. Just tell your SAN guy / hardware guy, when he goes to set up ANY new drive for SQL Server, that it needs to be formatted with a 1024 KB starting offset, and a 64 KB block size. He’ll say “okie dokie” and all will be well.

Again, this is super easy, and free, IF you pay attention when you’re setting up a new drive. Got a new server? New SAN? Assigning a new drive to SQL? Starting offset and block size, baby.
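If you’re the one doing the formatting, the request translates to something like the following on Windows Server. This is a sketch only: the disk number and drive letter are placeholders, and your SAN tooling may handle alignment for you.

```
diskpart
  select disk 2
  create partition primary align=1024
  assign letter=F
  exit

format F: /FS:NTFS /A:64K /Q
```

The align=1024 parameter sets the 1024 KB starting offset, and /A:64K sets the 64 KB allocation unit (block) size.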

How to Check Your Disks

There are a number of ways to check your partition offset and block size, but I find these the most reliable and simple. Run both commands from a command prompt (in Administrator mode).

Partition offset:

powershell.exe “get-wmiobject win32_diskpartition | select systemname, name, index, startingoffset | format-table -autosize”

Returns something like this:

systemname name                  index startingoffset
---------- ----                  ----- --------------
myserver    Disk #0, Partition #0     0        1048576
myserver    Disk #0, Partition #1     1     1049624576
myserver    Disk #0, Partition #2     2     1322254336
myserver    Disk #0, Partition #3     3     2505048064
myserver    Disk #0, Partition #4     4   239810379776
myserver    Disk #0, Partition #5     5   244105347072

This is on my laptop, so there’s only one disk (Disk #0). You’re only interested in Partition #0 for each disk, because that’s the partition at the start of the disk (which is where the partition offset matters). My laptop has a proper starting offset: 1048576 bytes / 1024 = 1024 KB.
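If you’d rather not sanity-check those startingoffset values with a calculator, the conversion is trivial. Here’s a quick sketch in Python; the 32256-byte figure is the old 63-sector misaligned default, included just for contrast:

```python
def offset_kb(starting_offset_bytes: int) -> float:
    """Convert a raw startingoffset value (bytes) to KB."""
    return starting_offset_bytes / 1024

def is_aligned(starting_offset_bytes: int, boundary_kb: int = 1024) -> bool:
    """True if the partition starts on the given KB boundary."""
    return starting_offset_bytes % (boundary_kb * 1024) == 0

# Partition #0 from the output above:
print(offset_kb(1048576))   # 1024.0
print(is_aligned(1048576))  # True

# The old 63-sector default (63 * 512 = 32256 bytes, i.e. 31.5 KB):
print(is_aligned(32256))    # False
```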

Block Size:

You CAN retrieve “BlockSize” using the Powershell statement above, but I have found it to be consistently inaccurate. Instead, do this for each disk:

FSutil FSinfo NTFSinfo F:

Returns something like this:

NTFS Volume Serial Number :       0x9c78028378025c80
NTFS Version   :                  3.1
LFS Version    :                  2.0
Number Sectors :                  0x000000001ba03fff
Total Clusters :                  0x00000000037407ff
Free Clusters  :                  0x000000000053950a
Total Reserved :                  0x00000000000007d0
Bytes Per Sector  :               512
Bytes Per Physical Sector :       512
Bytes Per Cluster :               4096
Bytes Per FileRecord Segment    : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x00000000174c0000

All you’re looking for is Bytes Per Cluster. On my laptop, it’s 4096 bytes. Hey, I don’t have dedicated SQL drives on here. But what you want to see is Bytes Per Cluster : 65536. Again, that’s 65536 bytes / 1024 = 64 KB, which is what you want for the disks that will hold SQL data and log files.
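If you’re checking a pile of drives, you can capture the FSutil output and parse out the one line you care about. A minimal sketch in Python, assuming you’ve redirected the command’s output to a string or file:

```python
def bytes_per_cluster(fsutil_output: str) -> int:
    """Pull the 'Bytes Per Cluster' value out of fsutil fsinfo ntfsinfo output."""
    for line in fsutil_output.splitlines():
        if line.strip().startswith("Bytes Per Cluster"):
            return int(line.split(":")[1].strip())
    raise ValueError("Bytes Per Cluster line not found")

# Trimmed-down sample of the output shown above:
sample = """Bytes Per Sector  :               512
Bytes Per Cluster :               65536"""

size = bytes_per_cluster(sample)
print(f"{size} bytes = {size // 1024} KB")  # 65536 bytes = 64 KB
```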

Your SQL Data and Log Drives Need a 1024 KB Starting Offset, and a 64 KB Block Size

As Microsoft’s Disk Partition Alignment Best Practices for SQL Server whitepaper puts it:

“Windows Server 2008 partition alignment defaults to 1024 KB (that is, 1,048,576 bytes). This value provides a durable solution. It correlates well (as described later) with common stripe unit sizes such as 64 KB, 128 KB, and 256 KB …”

Conclusion and Invitation

So be sure to put this in your best practices documents, and in your standards for new SQL Server installations. And be sure to give your SQL data and log drives a 1024 KB starting offset, and a 64 KB block size.

Remember that all my blogs are licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License, so steal this entire blog post, put it on your blog (with a link back to me and a note that says something like “reprinted by permission from Jen McCown”). Spread the word.

And, with very, very few exceptions, your SQL data and log drives need a 1024 KB starting offset, and a 64 KB block size.

Happy days,
Jen McCown

* As always, there are few rules without exceptions. Some systems may indeed benefit from a different block size, but that should be determined via testing. It should be deliberate. If I can get the free world (of SQL Server) checking their block size, and setting it on purpose, then I have done good.

Powershell is now “PowerCloud”

April 1, 2014 | News-ish


Microsoft, in its usual vein of focus-on-the-wrong-thing, has decided to rebrand the anticipated Powershell 4.0 as PowerCloud.  To keep the new theme consistent, cmdlets will now be called “cloudlets”.

The Dali poster looks FINE next to Stonehenge. Can we order the pizza now?

When I heard, I was – of course – livid.  Yet ANOTHER name change as fickle as a college student rearranging Ikea furniture in a smelly 10×12 dorm.

But then I started thinking about it. This is actually the first recent rebranding that makes sense to me. I get where the whole “cloud” thing comes from originally – here’s the middle of the muddle, where communication and stuff happens – but I’ve been really tired of Cloud services, Cloud databases, email in the Cloud – wasn’t it ALWAYS there?? – and cloudy cloudy Cloud-cloud.

But think about it: Powershell (excuse me, PowerCloud), as strong as it is on a local server, is MOST powerful when it’s applied across an enterprise: many AD servers, SQL servers, and so on.  Across a CLOUD of (admittedly, usually in-house) resources.  Get it? The new name puts the emphasis on the strength of Powershell. I mean PowerCloud.

And really, if I’m going to be accepting of this new change, I should just relax and accept the inevitable: Microsoft is committed  to the Cloud, in every way. Windows Azure, Cloud services, PowerCloud. I suspect the next moves will be rebranding more traditionally in-house Microsoft products, like

  • Windows => Windows Sky
  • Internet Explorer => Explorer Blue
  • SQL Server => Thunderbolt Data
  • Office => Storm Front
  • Surface => Firmament Tablet
  • Visual Studio => Heaven’s Workshop

I for one welcome our newly renamed shell interface overlords.

Happy Powerclouding, y’all…
Jen McCown


** Today’s soundtrack: Skyscrapers, Pennies from Heaven, Cheek to Cheek, Free Fallin’, Goodbye Blue Sky, Learning to Fly, Blue Sky Mine

GetLogSpace – table and stored procedure

We’ve talked before about how to get log space usage information in SQL Server.  After the 1,032nd time I re-wrote my custom query to also see the UNused log space, it occurs to me that this would be a nice thing to have on your server on a permanent basis.

Introducing: GetLogSpace!  It’s a simple enough thing:

  1. Use the script to create the table and stored procedure.
  2. Run the SP in ReportOnly mode (default mode, @ReportOnly = 1) to see the current log space information.
  3. Or, run it in log mode (@ReportOnly = 0) to save that data to the table. When you run the SP in log mode, it also deletes data older than @retentionDays days (the default is 2 years).

The SP returns the regular DBCC SQLPERF(LOGSPACE) data, plus a simple calculation of how much log space is free. For example:

DatabaseName  LogSizeMB  LogSpaceUsedPct  LogSpaceUNUSEDmb  LogSpaceUNUSEDmb_TOTAL  StatusNum  Logdate
master        1082.805   1.151            12.466            61.783                  0          NULL
model         0.742      42.697           0.317             61.783                  0          NULL
msdb          31.68      10.072           3.191             61.783                  0          NULL
tempdb        0.742      69.211           0.514             61.783                  0          NULL

It’s that Log Space Unused MB column I like…this shows you at a glance how much space is free in the file.
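The free-space math itself is just the complement of the percent-used figure that DBCC SQLPERF(LOGSPACE) returns. A sketch of that arithmetic in Python, with hypothetical numbers (the SP does this in T-SQL, of course):

```python
def log_space_unused_mb(log_size_mb: float, log_space_used_pct: float) -> float:
    """MB of transaction log currently free, from DBCC SQLPERF(LOGSPACE) figures."""
    return round(log_size_mb * (1 - log_space_used_pct / 100), 3)

# Hypothetical log: 150 MB total, 80% used -> 30 MB free.
print(log_space_unused_mb(150.0, 80.0))  # 30.0
```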

As I note in the SP header:

You could schedule this to run every day/week/month, to track log usage over time. Or you could just run the report to see the current usage.

Note that I’m NOT advocating the shrinking of log files willy-nilly. Your logs need room to breathe, and if they regularly take up 150 GB, they probably need 150 GB. In the cases where you’ve missed log backups on a FULL recovery mode database, you need to back up that log, then (MAYBE) shrink the log file to a more reasonable size.

*** TL;DR: Don’t habitually shrink log files! ***

Remember, kids: Use and/or modify this script at your own discretion!

Download the GetLogSpace script here.

Happy days,
Jen McCown
