Category Archives: Time-savers

Manage Security Centrally

When you’re in an environment where you need to add user accounts to different servers on a regular basis, it can be a pain to manage the requests: you have to connect to each server and either use the GUI to add the account, or write the statement yourself. Normally this isn’t a big deal, but if you have to do it several times a day, the extra time it takes to go through the steps starts to get old.
So here I’ve written a stored procedure that lets you manage the process from a single location. Now, I’m going to caution you up front that this SP is run from a server where the service account already has admin rights on the servers you’re targeting, so you need to make sure this SP is locked down. Make sure that only DBAs you trust and who are responsible can run this. And if you can, make sure even the ones you trust can’t alter it because it logs its actions and you don’t want anyone being able to turn the logging off.
OK, you’ve been warned. I don’t want any lawsuits or emails telling me that someone did something that messed things up in your shop, because it’s all on you. And while I’ve tested this and run it in a few locations, make sure you test it yourself before relying on it in any real production scenario.
The legal stuff is out of the way so let’s get into the details of the SP itself.

CREATE PROCEDURE dbo.AddServerPerms
@ServerName varchar(200),
@Action varchar(10),
@UserAcct varchar(100) = 'domain\SMcCown',
@Run bit = 0
AS
/*
This SP allows you to add/drop a user to sa on any box you like. It makes the process easier than connecting to the server each time.
Of course, you should always be careful about adding sa accts to servers so you should lockdown the use of this SP to only DBAs that you trust,
and who are responsible.
This SP also logs its actions so you can see which accts were added/dropped, when, and by whom.
This is also meant to be run from an acct that currently has sa on the server.
So replace the Execute as stmt with the acct of your choice.
This is the log table that needs to be in place before running this SP.
CREATE TABLE dbo.SASecurityLog
(
ID int identity(1,1),
ExecutionDateTime datetime,
ServerName varchar(200),
AcctName nvarchar(200),
Action varchar(10),
Code nvarchar(1000),
RunBy nvarchar(200)
)
*/
Execute as Login = 'sa';
DECLARE @SQL varchar(400),
@TotalSQL varchar(1000),
@TotalDropSQL varchar(1000),
@DropSpidSQL varchar(200),
@RunErrors nvarchar(max),
@Result varchar(10);
If UPPER(@Action) = 'ADD'
BEGIN
SET @SQL = 'CREATE LOGIN [' + @UserAcct + '] FROM WINDOWS; ALTER SERVER ROLE [sysadmin] ADD MEMBER [' + @UserAcct + '];'
END
If UPPER(@Action) = 'DROP'
BEGIN --Drop
SET @SQL = 'DROP LOGIN [' + @UserAcct + '];';
--------------------------------------------------------------------------
-----------------------BEGIN Get SPIDs------------------------------------
--------------------------------------------------------------------------
--Get SPIDs to kill.
SET @DropSpidSQL = 'SELECT SPID from sys.sysprocesses WHERE loginame = ''' + @UserAcct + ''';';
CREATE TABLE #SPIDs(col1 varchar(1000));
SET @TotalDropSQL = 'sqlcmd -S "' + @ServerName + '" -Q "' + @DropSpidSQL + '"'
INSERT #SPIDs(col1)
EXEC xp_cmdshell @TotalDropSQL;
DELETE #SPIDs
where col1 like '%--%'
OR col1 like '%rows affected%'
OR LTRIM(RTRIM(col1)) = 'SPID' --remove the sqlcmd column header row too
OR col1 IS NULL;
--------------------------------------------------------------------------
-----------------------END Get SPIDs--------------------------------------
--------------------------------------------------------------------------
 
--------------------------------------------------------------------------
-----------------------BEGIN Drop SPIDs-----------------------------------
--------------------------------------------------------------------------
----You can't drop a login if it's currently logged in, so here's where we
----drop all the SPIDs for the current user first.
----There's more than one way to do this. I chose this one because I didn't
----want to cram all this cursor syntax into D-SQL, and I didn't want to require
----a script on the drive to be managed. While that may be a cleaner way to code
----this solution, this method is more portable and doesn't require any extra setup.
DECLARE @currSPID varchar(100);
DECLARE SPIDs CURSOR
READ_ONLY
FOR SELECT col1 FROM #SPIDs
OPEN SPIDs
FETCH NEXT FROM SPIDs INTO @currSPID
WHILE (@@fetch_status <> -1)
BEGIN
SET @DropSpidSQL = 'KILL ' + @currSPID + ';'
SET @TotalDropSQL = 'sqlcmd -S "' + @ServerName + '" -Q "' + @DropSpidSQL + '"'
EXEC xp_cmdshell @TotalDropSQL;
--print @TotalDropSQL
FETCH NEXT FROM SPIDs INTO @currSPID
END
CLOSE SPIDs
DEALLOCATE SPIDs
DROP TABLE #SPIDs;
--------------------------------------------------------------------------
-----------------------END Drop SPIDs-------------------------------------
--------------------------------------------------------------------------
END --Drop
--------------------------------------------------------------------------
-----------------------BEGIN Log Action-----------------------------------
--------------------------------------------------------------------------
INSERT dbo.SASecurityLog
(ExecutionDateTime, ServerName, AcctName, Action, Code, RunBy)
SELECT GETDATE(), @ServerName, @UserAcct, @Action, @SQL, ORIGINAL_LOGIN(); --ORIGINAL_LOGIN() still returns the caller under EXECUTE AS; SUSER_SNAME() would just log 'sa'
If @Run = 0
BEGIN
SELECT @SQL AS AcctSQL;
END
--------------------------------------------------------------------------
-----------------------END Log Action-------------------------------------
--------------------------------------------------------------------------
--------------------------------------------------------------------------
-----------------------BEGIN Run Action-----------------------------------
--------------------------------------------------------------------------
If @Run = 1
BEGIN --Run = 1
CREATE TABLE #Results(ID tinyint identity(1,1), col1 nvarchar(2000));
SET @TotalSQL = 'sqlcmd -S "' + @ServerName + '" -Q "' + @SQL + '"'
INSERT #Results(col1)
EXEC xp_cmdshell @TotalSQL;
--------------BEGIN Get Run Errors------------------------
SELECT @RunErrors = STUFF((SELECT ' ' + col1
FROM #Results AS T1
ORDER BY T1.ID
FOR XML PATH('')), 1, 1, '')
FROM #Results AS T2;
If @RunErrors IS NULL
BEGIN
SET @Result = 'OK';
END
If @RunErrors IS NOT NULL
BEGIN
SET @Result = 'FAILED';
END
--------------END Get Run Errors---------------------------
END --Run = 1
--------------------------------------------------------------------------
-----------------------END Run Action-------------------------------------
--------------------------------------------------------------------------
select TOP(1) @Result AS Result, @RunErrors AS Errors, * from dbo.SASecurityLog ORDER BY ExecutionDateTime DESC;

Now there’s the code for you to use at your leisure. I’m going to discuss the important parts below, section by section.

The @Action parameter: you want to be able to use the same SP to add or drop the account, so you pass the action in.

The EXECUTE AS LOGIN = 'sa' statement: I’ve set this up to run as sa because the service account on the central box has sysadmin on all the servers in the shop. A good scenario for this is when you have a server that’s running a performance app and needs to have sysadmin on all your SQL boxes. I discuss the security implications of this in a section below.

The ADD/DROP blocks: depending on the action you pass in, the SP builds the proper syntax.

The “Get SPIDs” section: there’s more than one way to do these next couple sections of code.  I chose this one because I didn’t want there to be any external setup.  You should just be able to install the SP and start using it.  So here I’m getting the list of SPIDs taken up by the acct on the target server.  You can’t drop a user acct if it’s currently logged in, so we get the list of SPIDs to delete and then cursor through them to kill them here and in the next section.  It’s possible that one of the SPIDs could disconnect and another take its place on the server between the time that you get the SPID list and when it gets killed, but it’s unlikely.

The “Drop SPIDs” section: this is the cursor that kills the SPIDs for the current user on the target server.  Again, this could be done a different way, and if you really don’t like this then rewrite this section.

The “Log Action” section: we’re logging the action before we run it.  It’s important to log the action before the SP gets a chance to error out for some reason.

The “Run Action” section: this is where we actually run the stmt against the target server.  Notice we’re inserting the EXEC output into a #table so we can capture the results.  If the stmt is a success it returns NULL, so it’s easy to see whether it failed.

The “Get Run Errors” section: if the stmt fails for any reason we captured the error in the previous section, so we can put those results into a var and use that to set a status.  The use of STUFF here is how I commonly do this and it requires the ID column, so if you do this on your own, the code is easy to paste into another SP.

The final SELECT returns the results back to the user.
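
And to see it in action, here’s a minimal usage sketch. The server and account names are placeholders, so swap in your own:

--Preview only: with @Run = 0 the SP logs the action and prints the generated SQL.
EXEC dbo.AddServerPerms
    @ServerName = 'TargetServer01',
    @Action = 'ADD',
    @UserAcct = 'domain\jdoe',
    @Run = 0;

--The same call with @Run = 1 actually executes it against the target server.
EXEC dbo.AddServerPerms
    @ServerName = 'TargetServer01',
    @Action = 'ADD',
    @UserAcct = 'domain\jdoe',
    @Run = 1;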

Account Security

Managing accounts in an enterprise can be time consuming, and the more servers you have, the more time consuming it gets.  This is why a lot of shops opt for a single service account for all their SQL services.  However, this is a very dangerous practice: now you have a single account that runs every server in your network, so if there’s a security breach, you have a big problem.  With so many processes running under this account, there are literally hundreds, if not thousands, of opportunities for an attacker to do damage.

However, sometimes it’s necessary to run a centralized management server with an account, and give that account sysadmin on each SQL Server.  Quite often 3rd party performance monitors need these elevated rights.  There’s a difference, though, between having a single account with sa and having all the servers running under the same acct.  In the centralized server scenario, the only processes running under that account are the ones for the application, and often they’ll be running only on the central server.  There won’t be any linked servers, jobs, SPs, or anything else running under that account, so a breach against it is much less likely.  It can also be locked down so only the DBAs know it, whereas oftentimes many people know the service account passwords.  And again, with all of the processes likely running under a shared service account, there are many chances for security to get compromised.

It’s also easier to change the password for that single centralized service account, since you don’t have to change it across 100 servers.  So you’re more likely to be able to adhere to good security principles.

Now, for the record… you should never run all your services under the same user account.  Always take the extra effort to have a separate account for each server.  And at the very least, you can group the servers by application and have a single service account for each application.  It’s not as good as having one for each box, but you’ll at least minimize the damage should you be compromised.

“What users are in this group?”

We solved this question.

Update: Sign up for one of our Minion Enterprise demos this coming Friday, July 3!

Minion Enterprise collects SQL Server login data, as well as Active Directory information, for an entire enterprise. The AD expansion module ties this data together to provide so much insight:

  • Find out what users are in a Windows group…especially those groups that have sysadmin privileges!
  • List all users that have SA rights on any instance in the environment.
  • Discover which SQL Server instances a specific user has access to, and via what groups.
  • Filter by environment, location, SLA, server, login type, or any combination of the data available.

These are the exact questions we’ve always needed answered, in every single shop. So, we know this will be immensely useful in your shop.

One client was recently able to reduce their SQL access on one server by two-thirds. They simply used the AD expansion module to identify the rogue group with hundreds of members, and removed that group’s rights.

Take a look at the AD expansion module demo below, and then get in touch for your own 90-day trial license of Minion Enterprise.

 

Video: A Better Way to Reindex

Edit: Corrected the recording link.

Yesterday I taught “A Better Way to Reindex” for the PASS Performance Virtual Chapter (event link, with recording).

The recording is up, if you missed the live event, or if you want to re-watch it:  https://attendee.gotowebinar.com/recording/8483181173057914370

And by the way, the demo that fails (there’s ALWAYS one demo that fails) is due to the fragmentation routine we use fragmenting the data far more than usual. All we had to do was increase the reorg threshold and it would’ve been fine. So for the record, it’s the fault of the test harness, not of Minion Reindex.

Here’s the abstract:

A BETTER WAY TO REINDEX

Let’s play a guessing game: I guess that you don’t want to spend time on index maintenance, but you know your servers need it. You want something that’s straight plug and play, dependable, and preferably free. I also guess that you don’t want to manage multiple jobs for this one maintenance task, even when you have exceptions and special settings. Am I close on this? Come and take a look at Minion Reindex. It does all those things that you want for index maintenance, plus some really innovative stuff. Watch your reindex progress live. Configure settings and exceptions for individual databases or tables. Gather fragmentation stats separately, outside your maintenance window. There’s quite a lot more so come see how you’ll make your index maintenance a lot easier.

Of course, you should go and download Minion Reindex yourself, and see the documentation, and follow along with all the fun stuff I do in the session.

Allow_Page_Locks for Reorgs

There are many settings that get set one way or another in DBs and in tables. Allow_Page_Locks is one that you may not be able to do anything about, because your vendor may require it and your situation may require it as well. Normally it’s set to true, but it does get set to false, and when it does, it typically needs to stay that way.
The problem is that a reorganize will fail against an index that has allow_page_locks = false. But if you simply change the setting, you may see increased blocking issues. So what is there to do?

Well, the answer is Minion Reindex. We allow you to define pre and post code at the table-level that you can use to switch this option on and then off again when the table is through. And better yet, we even give you the code to discover all of these issues in your database and fix them. In the Minion Reindex download folder you’ll find a Queries folder. This folder has a sql file that you can run and it will insert the table-level exceptions with the proper pre/post code. The precode sets allow_page_locks = true and the postcode sets allow_page_locks = false. This way you can still have the setting the way you need it, and perform your index maintenance too.
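
In case you’re curious what that pre/post code boils down to, here’s the general shape of it. The table and index names are hypothetical; the script in the Queries folder generates the real statements for you:

--Precode: allow page locks so the reorg can run.
ALTER INDEX IX_MyIndex ON dbo.MyTable SET (ALLOW_PAGE_LOCKS = ON);

--Minion Reindex performs the reorg here.
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;

--Postcode: put the setting back the way the vendor needs it.
ALTER INDEX IX_MyIndex ON dbo.MyTable SET (ALLOW_PAGE_LOCKS = OFF);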

And of course, Minion Reindex is completely free so download it now and you won’t be sorry.

129 of 635

This stat is one of the coolest things you’ll see in Minion Reindex.

Gathering fragmentation stats in large databases can take a long time and you have no insight into what’s going on.

With Live Insight, Minion Reindex allows you to see what’s going on every step of the way, including how many indexes you have left to gather stats on.

And we even give you the name of the table and index that’s currently having its stats gathered.  (Our reindexing operations aren’t black boxes. I use this almost every day and I honestly don’t know what we ever did without it.)

NOTE: This feature is turned on by default. In case you’ve accidentally turned it off, make sure that for the current database you have LogProgress = 1 in the Minion.IndexSettingsDB table.
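
If you do need to flip it back on, it’s a one-liner along these lines (the DBName column in the WHERE clause is an assumption on my part, so check the actual key columns in your copy of the table):

UPDATE Minion.IndexSettingsDB
SET LogProgress = 1
WHERE DBName = 'MyDatabase'; --hypothetical database name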

Live Insight…by specific table

It gets even better. You may not be interested in Live Insight for every table.

In this case you can turn off Live Insight for certain tables, or turn it off for an entire database and then turn it on only for certain tables.

This is just one of the many useful features we have in our log.

And don’t forget: you can configure at the database level specifically or use the default DB settings by configuring the MinionDefault database in the Minion.IndexSettingsDB table.

I hope this helps you configure and use Minion Reindex better.

Order by Index Usage in Minion Reindex

Last week, a user asked if he could order index maintenance operations at the table level, based off of index usage. And of course, the answer to that is definitely yes. We designed Minion Reindex to have flexibility to address custom preferences like this without a lot of hassle, and with no extra jobs.

Minion Reindex already allows table ordering: you can give individual tables a heavier ReindexGroupOrder “weight” (higher numbers are reindexed first). All you have to do is change the order based on your criteria. We’ve written a script – MinionReindex-OrderTablesByUsage – that updates the ordering for all tables in a database, based on usage. You’re welcome to customize this script and change the criteria to whatever you like.

A couple notes on the script:

  • Read the introductory comments for instructions and notes. And of course, review the script well.
  • This might have been easier with a MERGE statement, but we wanted it to be compatible with all versions that Minion supports. The script does an insert or an update based on whether that table is already listed in the “Minion.IndexSettingsTable” table (see the sketch after this list).
  • This script can clearly be modified to meet any number of needs, like ordering tables by row count, or even excluding tables that don’t have enough reads.
  • To use this script, you can either add a job step, or encapsulate it in a stored procedure and add it as a DBPreCode in the Minion.SettingsDB table. It’s that easy.
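
For reference, the insert-or-update pattern in the script looks roughly like this. The DBName and TableName key columns are my assumption here; the real script handles all the settings columns:

DECLARE @DBName sysname = 'MyDatabase',    --hypothetical names
        @TableName sysname = 'dbo.MyTable',
        @Weight int = 100;

IF EXISTS (SELECT 1 FROM Minion.IndexSettingsTable
           WHERE DBName = @DBName AND TableName = @TableName)
    UPDATE Minion.IndexSettingsTable
    SET ReindexGroupOrder = @Weight
    WHERE DBName = @DBName AND TableName = @TableName;
ELSE
    INSERT Minion.IndexSettingsTable (DBName, TableName, ReindexGroupOrder)
    VALUES (@DBName, @TableName, @Weight);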

Feel free to use this script or alter it as you need. And if you make it do something really cool, send it back to us and we’ll add it to the community of scripts on the site. We’ve done some initial testing of this script, but nowhere near the level of testing we put the product through so if you find any bugs let us know. MinionReindex-OrderTablesByUsage

Download Minion Reindex at MidnightSQL.com/Minion

Move DB Files with Regex

Hey guys, I just posted a new vid on how to use Regex to format DB files so you can easily move them.
The situation is this… you’ve got lots of DBs you want to move to a new drive or to a new server. You need to write the ALTER DATABASE commands to move all the files, and then you need to script the file-copy statements at the command line, in something like PowerShell. The problem is that to do this effectively, you need some code to parse the filename from the rest of the path.
Here I’m going to show you how to use Regex to parse that out so you can easily build the statements you need. I can write the script to move hundreds of DB files in under a minute. Come watch.
http://midnightdba.itbookworm.com/Video/Watch?VideoId=407

And here’s the demo code so you can follow along.
MoveDBFiles
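
If you want something to play with before watching, here’s a rough T-SQL take on the same idea, using string functions instead of regex to split the filename off the path (the E:\NewData\ target is a placeholder):

--Build an ALTER DATABASE MOVE statement for every file, parsing the filename out of the full path.
SELECT 'ALTER DATABASE [' + DB_NAME(database_id) + '] MODIFY FILE (NAME = [' + name
	+ '], FILENAME = ''E:\NewData\'
	+ RIGHT(physical_name, CHARINDEX('\', REVERSE(physical_name)) - 1) + ''');'
FROM sys.master_files
WHERE database_id > 4; --skip the system DBs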

Log Management Made Easy

Hey guys… I wrote a nice little log management script I thought some of you might like. It doesn’t do anything to your system so it’s safe to run. Well, I’m turning xp_cmdshell on and off so if you don’t like that then this script isn’t for you.
I’m just adding more info to the DBCC SQLPERF(LOGSPACE) output. These are the most common things I need to know when I have a runaway log. Of course, feel free to modify it any way you like. Unfortunately, due to when PowerShell came into play, you can only run this on SQL 2008 boxes and above. If you want to run it on lower versions you’ll have to take out the PS portion of it.

One more thing before I give you the script: you’re responsible for anything you run on your box, so don’t blame me if you mess something up. Like I said, I believe this script to be safe, but only if you manage the xp_cmdshell portion properly. As it stands, the script turns xp_cmdshell off when it’s done, so if you’ve got processes that rely on it, they will fail. So be warned yet again that this is a risk. The reason I turn it off again is that I don’t want to open up anything on your box that shouldn’t be open. But I really like having the extra info so I don’t have to go look it up. I hope you do too.

Some important notes about the script:
1. It gives you the physical location of the log files, so if there’s more than 1 you’ll see more than 1 entry for each DB.
2. The ShrinkCmd has a variable at the top used to control its default. I like to shrink down to 1GB a lot of times, but set this to whatever you like.
3. You can expand this for yourself in ways I can’t do for you. For instance you could tie it to your log backup job to see when the last execution was and if it failed.
4. I’ve added the last log backup date for you… you’re welcome.
5. Because of how the PS provider works, you have to change the instance name at the top to the instance you’re currently working with. There’s nothing I could do about that, but I tried to make it as simple as possible.
6. The PS portion relies on xp_cmdshell. However, I turn it on and off for you in the script. If you want it left on, make sure you comment that portion out of the code or you could have stuff that breaks because you just turned on xp_cmdshell.

declare @LogSize int,
		@Instance varchar(100)
SET @LogSize = 1024
 
SET @Instance = 'localhost\default' -- For a default instance, use 'default' as the instance name, ex: 'MyServer\default'
 
SET NOCOUNT ON
 
CREATE TABLE #LogSpace
(
DBName varchar(100),
LogSizeInMB float,
LogSpaceUsedInPCT real,
Status tinyint
)
INSERT #LogSpace
EXEC ('dbcc sqlperf(logspace)')
 
 
--Global temp table: the PS child process connects back on its own session,
--so it can't see a #local temp table.
CREATE TABLE ##LogSpacePSTemp
(
DBName varchar(100),
LogBackupDate varchar(25)
)
 
exec sp_configure 'show advanced options', 1
reconfigure
 
exec sp_configure 'xp_cmdshell', 1
reconfigure
 
declare @cmd nvarchar(2000)
		--Shell out to sqlps, walk the databases, and insert each DB's
		--LastLogBackupDate into the global temp table via invoke-sqlcmd.
		SET @cmd = 'sqlps "cd sqlserver:\sql\' + @Instance + '\databases; $a = dir; foreach($DB in $a){$DBName = $DB.Name; $LogBackupDate = $DB.LastLogBackupDate; invoke-sqlcmd -query ""INSERT ##LogSpacePSTemp SELECT ''$DBName'', ''$LogBackupDate''"" -SuppressProviderContextWarning}"'
 
		--PRINT @cmd
		exec xp_cmdshell @cmd, no_output
 
 
 
select 
LS.DBName
, LS.LogSizeInMB
, LS.LogSpaceUsedInPCT
, D.log_reuse_wait_desc as LogReuseWait
, LT.LogBackupDate as LastLogBackup
, DATEDIFF(mi, LT.LogBackupDate, getdate()) as MinsSinceLastLogBackup
, D.recovery_model_desc as RecoveryModel
, MF.physical_name 
, 'USE [' + D.name + ']; DBCC SHRINKFILE([' + MF.name + '], ' + CAST(@LogSize as varchar(10)) + ')' as ShrinkCmd
from #LogSpace LS
INNER JOIN master.sys.databases D
ON D.Name = LS.DBName
INNER JOIN master.sys.master_files MF
ON D.database_id = MF.database_id
INNER JOIN ##LogSpacePSTemp LT
ON LT.DBName = LS.DBName
ORDER BY LS.LogSizeInMB DESC
 
drop table #LogSpace
drop table ##LogSpacePSTemp
 
exec sp_configure 'xp_cmdshell', 0
reconfigure
exec sp_configure 'show advanced options', 0
reconfigure

An Upgrade Disaster

I got an email from a user at another company today and he told me about the SQL 2012 upgrade they just finished. And apparently it was a disaster. Here’s the general gist of what happened.

They have a huge app that runs their entire business, and the vendor talked them into upgrading to 2012. Originally they were slated to do tons of testing and upgrade probably sometime in November. But they decided not to listen to their DBA and instead let themselves be led by the vendor, who told them that a SQL upgrade was easy and nothing to worry about. So they did some perfunctory testing and pushed the upgrade to this past week. I know, smart, right?

So this vendor did the upgrade for them, and from what I know it completed OK. The problems came after the upgrade. Now, I don’t have any real specifics, but I do know that it caused a 10-hour outage. One of the directors asked about the backout plan and was politely told to mind his own business. Everyone is calling the upgrade a disaster. They didn’t have any way to restore in case the upgrade failed in a really bad way… and that means no final backup, no scripted objects, and no mirrored system. This was an in-place, all-or-nothing upgrade.

Just so we’re clear on this, that’s not the way you run an upgrade. Upgrades take plenty of testing from the DB side and the app side. You should never upgrade anything without extensive testing, and you should always have a backout plan. Now, what does a backout plan really mean? I find the backout plan often gets overlooked, mainly because people have a hard time defining it.

To me a backout plan means a few different things depending on what you’re after. Let’s take this upgrade as an example. No matter how good you think SQL upgrade is, there’s always something that can go wrong. So at the very least, you need to take a final backup of ALL the system and user DBs right before the upgrade. Make sure you kick everyone out of the DB first because it’s not a final backup if there are still going to be modifications afterwards. That’s a good start for sure, and what I’d consider to be a minimum effort. Here’s a quick list of the steps I like to take for an important upgrade such as this:

1. Copy all system DBs to another location. This just makes restore much easier because with DBs as small as most system DBs, you can just drop them back in their original location and you’re good to go.

2. Script all logins with SIDs (see the sketch after this list).

3. Script all jobs.

4. Make sure I have all SSIS pkg projects at the ready so I can redeploy all pkgs if I need to.

5. Do a test restore of the final backup before starting the upgrade.

6. Script any system-level settings like sp_configure.

7. Script any repl, log shipping, mirroring scenarios.

8. Make sure I have passwords to any linked servers. While I try to keep everyone off of linked servers, I have to admit they’re a part of life sometimes. And you don’t want your app to break because you don’t know the password to the linked server. It’s not the end of the world if this doesn’t happen, but it’ll make life easier.
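
Since item 2 trips people up the most, here’s a bare-bones sketch of scripting SQL logins with their original SIDs. It doesn’t carry over the password hashes the way Microsoft’s sp_help_revlogin does, so treat it as a starting point, not the finished article:

--Generate CREATE LOGIN statements that preserve each login's SID.
SELECT 'CREATE LOGIN [' + name + '] WITH PASSWORD = ''ChangeMe123!'', SID = '
	+ CONVERT(varchar(100), sid, 1) + ';'
FROM sys.sql_logins
WHERE name NOT LIKE '##%'; --skip the certificate-based system logins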

So basically, the more important the DB, the more of these steps you’ll follow. You need to prepare for a total meltdown and make sure you can recover in as timely a manner as possible. As I sit here and write this I feel stupid because it seems so basic, but there are clearly those out there who still need this kind of advice, so here it is.

And if you have a good test box handy, make sure you test as many of these procedures as possible. Script out your logins, etc., apply them to the test box, and see if things work as they should. Deploy your SSIS pkgs to your test box and make sure they run. Don’t just rely on what you think *should* work; actually make sure it works. This is why some upgrade projects take months to complete. It’s not the upgrade itself, it’s all the planning around it. And while this isn’t a full list of everything you could do to protect yourself, it’s a damn good start.
Happy upgrading.

Nothing beats practice

For a very long time now I’ve preached to my various teams the value of doing your 20s.  And I don’t expect most of you to know what that is, so here’s a quick explanation.  I believe there’s certain syntax you should know by heart.  Things like backup/restore, DB creation, working with user accts, etc. should be so ingrained you never forget them.  And the only way to do that is to do it again and again and again.  So when I start new DBAs down their road to discovery, I start them on their 20s.  And that means writing the command you’re working on 20x a day for 2 weeks.  And by the end of the period, you’ve written it at least 200x.  Now, in the case of backup/restore, I typically pick a test server with around 10-15 DBs (I often create empty DBs) on it and make them type the commands for every DB 20x a day for 2 weeks.  And that means that they’re writing the commands 200x for each DB.  And a little quick math tells you that the total they’ve written the backup command by the end is at least 2,000x.  Now, name me something you’ve done 2,000x and have forgotten easily. 

The whole point of this is to give them tons of experience with the commands in a very short period of time.  Innovation and understanding don’t come when you’re struggling to remember syntax, and they don’t come if you don’t know the options available to you.  So by forcing my DBAs to learn like this, I’ve found that they really start to become competent, and ultimately they appreciate that I forced it on them.  It’s incredible how simple things get when you know what you can and can’t do, because once you know how to code it you’ve got a world of options available to you.  And I still find that way too many DBAs are button monkeys who don’t know anything that isn’t presented to them in a wizard.

So anyway, the main reason I wrote this is to say that it proved itself to me today.  We needed to add a PK to a table and I was in a room with a bunch of SQL devs.  They started to look up the syntax and I said, wait, give me a crack at it.  I sat down and busted it out without any slips.  The thing is, this is one of the things I’ve practiced a lot, but I haven’t touched it in almost a year, so I’m a little surprised to find that it didn’t leave me.  Then again, when you do it as much as I have, you won’t forget it anytime soon.  So my point here is that if you practice enough, you can go a few months without doing something and you’ll still be OK.  And the funny thing is I didn’t even really remember the syntax myself; my fingers did it for me.  I love how muscle memory works.  So hey, if you really want to get good at SQL, practice the more common syntax.  Do your 20s.  You’ll really be glad you did, and you’ll get things done a lot quicker because you’re not always looking up basic syntax.
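
And for the record, the syntax in question is short enough that there’s really no excuse not to have it in your fingers (table and column names made up, obviously):

ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (ID);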