SQL Server DBA Tips & Tricks


Active August

The other day on Twitter Mike Walsh ( Blog | Twitter ) was commenting to Chuck Boyce ( Blog | Twitter ) about how well he was doing with his recent weight-loss program. One joke and a bandwagon tweet later, Active August was born!

So what is Active August you ask? Well Mike outlined the basic gist of it over on his blog and I’ll be following the same general plan with a few personal touches. Here’s the outline from Mike’s blog post:

The Plan–> I will talk about my plan; it doesn’t have to dictate your plan. The only requirement for #activeaugust? Get more active than you were in July. My hope is for us all to start something that we continue into September, October and then poof! We have a habit. I will blog occasionally about my results, but part of the #activeaugust plan for me is going to include:

  • Eating "Cleaner" –> I will try to get some links in a follow-up post, though it may be a week before I get to that. Basically eating more raw foods, eating less of the processed stuff, drinking more water, maybe some more milk. Getting away from the sugars and processed foods.
  • Doing Something Every Day –> I sit on my butt at work all day, I come home and play with the kids most days, but then I sit on my butt and work at night when the house is sleeping. For my health (physical and emotional), I am going to commit to at least 30 minutes of activity every day. It could be a walk at lunch, it could be getting back to my early-morning swimming, getting into some weight training (building more lean muscle means I am burning more calories when sitting on my butt), or a combination.
  • Blog and tweet about it –> I am going to tweet every time I hit the minimum 30 minutes of activity; if I do more, maybe I’ll tweet that also. If I drag you along with me I am going to be more likely to do it. Why don’t you drag us along with you and stick to it?
  • Weigh In –> This isn’t only about weight, but I will admit my starting weight, throw it on the blog/Twitter, and then do so either weekly or at least at the end. I care more about how I feel, and you should too.

My Personal Goals –> I will not dictate your goals but I hope they can be along the same lines.

  • Lose Some of My Weight –> I need to lose 30-40 lbs, and a lot of that is the unhealthy fat around my lower abdomen… Do you have that? That’s the weight that is a risk factor for many diseases and problems later in life.
  • Feel Better/Have More Energy –> I want to have more energy when I wake up, and I want to be more alive when Friday rolls around and it’s time to get yard work done and have fun with my family.
  • Stick with it –> I want to make changes that are not drastic but livable. I’m not looking to diet, because I like variety; I’m just cutting back and eating healthier, getting to more basic foods. I’m not looking for a workout program that I’ll get bored with or that becomes a pain to stick with. I just want to stay conditioned.

For one, I’ve decided to give up sodas. Yes, I know, blasphemous for a geek to part with a main source of caffeine, but alas I must give up the “liquid satan”. In lieu of soda I’ll be upping my water intake, something I should be doing anyway, and drinking unsweetened tea with artificial sweetener.

As for the active part, I’ve decided to dedicate at least 30-45 minutes a night to (at the very least) walking or doing something physical. The easiest activity I’ve found that’s practical and gets me off the couch is walking my dog. Another thing I’m ramping back up is riding my road bike. This past weekend I rode 7.5 miles just to get back in the saddle and get a feel for it again. I hope to ride at least 3 times a week. I used to do 10-20 miles when I rode before, so hopefully I’ll get back to that volume.

So, to be perfectly honest, I’ve already started this program, and it’s been joked that I’ve really started Jumpstart July. Anywho, to get this party started, here’s an Excel chart where I’ll post my weekly weight progress.




Upcoming Speaking Engagements

Looks like August is definitely shaping up to be a busy one for me! First I’ll be presenting at SQL Saturday #16 in South Florida. That takes place on Saturday, August 8th. I’m really excited for this event since it’ll be the first SQL Saturday I’ve presented at.

Following that, I’ll be heading over to Melbourne the next week, on Thursday, August 13th, for the Spacecoast SQL Server User Group meeting. At both locations I’ll be presenting my Policy Based Management 101 session. Then, to finish off my month, the following week I’m moving to a new place, so August will definitely be an active one. Speaking of which, check out and participate in Active August! I’ll be writing up a series of blog posts about my #activeaugust experience as well.

Sweet! Looks like I just confirmed with Jack Corbett (Blog | Twitter) that I’ll be presenting for the Orlando SQL Server User Group on September 15th!



Recap: Performance Tuning and Query Optimization

Tonight was our monthly SQL Server User Group meeting, and our featured presenter this evening was Plamen Ratchev (Blog), presenting on performance tuning and query optimization. First off, he has an awesome accent. I think he’s of Croatian descent from what he mentioned, but he rolls his R’s something fierce. I should’ve had him say the phrase “reporting services” a bunch of times just to make me giggle. Anyway, I’m way off topic…

He opened with a quote from Donald Knuth:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

This was a good point to open with: if you focus only on performance in the development phase, you’re more than likely headed for an epic fail. He then went on to show the differences between being reactive and proactive in terms of performance tuning. His take was that in Europe, development processes focus more on being proactive and taking care of issues before they become major problems, as opposed to running around putting out fires all the time like many a DBA (myself included) is forced to do on a daily basis.

In an interesting story, he told us about how he had come to the U.S. and pitched a European software package that allowed a major car manufacturer to improve its processes. The software would basically alert you if anything in production fell “out of the norm” and would advise actions on how to fix it. The American company had a different approach. They basically hired a team of high-priced consultants to come in for a week, measure everything nuts-to-bolts about what was wrong in the production process, produce a report, and leave (whether or not the problem got fixed). What surprised me most was that the manager told him that if there’s a problem, they pretty much just build another assembly line somewhere else rather than fix the current issue. If you’re wondering why they need federal bail-out money, this story should give you a slight clue.

The presentation continued with common performance issues you’re likely to find, such as inefficient queries, retrieving too much data, and inefficient or missing indexes, among a few other things. This was a nice list for both devs and DBAs alike, so everyone is aware of these common mistakes. This led to the topic of problematic designs such as the “one true lookup table” issue. This is when, rather than normalizing your data, someone decides it’s easier to just throw everything into one large table and add columns as needed later on. I can see some of you cringe as you read that, since you’ve probably seen it in production somewhere at some point. The surprising thing that came out of this example (to me, anyway) was that sometimes this setup actually makes sense for very specific applications, such as a simple survey or a medical application that only stores straight facts (e.g. a patient monitor). Another oldie but goodie is mismatched data types on join columns or filters. While this may work without a problem, when you throw a heavy workload at it you’ll see performance tank, because behind the scenes the database engine has to do lots of implicit conversions to process that query for you. So remember that little tidbit next time you’re planning with your devs and database developers.
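To make that implicit-conversion pitfall concrete, here’s a small sketch (table and column names are hypothetical) of how comparing a VARCHAR column to an NVARCHAR parameter forces the engine to convert the column, which can cost you an index seek:

```sql
-- Hypothetical table: CustomerCode is VARCHAR, indexed for fast lookups.
CREATE TABLE dbo.Customers
(
    CustomerID   INT IDENTITY(1,1) PRIMARY KEY,
    CustomerCode VARCHAR(10) NOT NULL
);
CREATE INDEX IX_Customers_Code ON dbo.Customers (CustomerCode);

-- Mismatched types: NVARCHAR has higher data type precedence, so the engine
-- implicitly converts the VARCHAR column (look for CONVERT_IMPLICIT in the
-- plan), which can turn an index seek into a scan.
DECLARE @Code NVARCHAR(10) = N'ABC123';
SELECT CustomerID FROM dbo.Customers WHERE CustomerCode = @Code;

-- Matching types: a straightforward index seek.
DECLARE @Code2 VARCHAR(10) = 'ABC123';
SELECT CustomerID FROM dbo.Customers WHERE CustomerCode = @Code2;
```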

The next interesting thing I learned was regarding data type optimization. Do you know what one of the fundamental differences between VARCHAR and NVARCHAR is, besides one taking up twice as much space as the other? NVARCHAR stores Unicode, so it can handle characters across languages and collations, while VARCHAR is limited to the code page of a single collation. So if your application is only going to be delivered with a single, default collation, stick with VARCHAR.
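You can see the space difference for yourself with DATALENGTH; NVARCHAR stores two bytes per character, so the same string doubles in size:

```sql
DECLARE @v VARCHAR(20)  = 'SQL Server';
DECLARE @n NVARCHAR(20) = N'SQL Server';

-- VARCHAR: 1 byte per character; NVARCHAR: 2 bytes per character.
SELECT DATALENGTH(@v) AS VarcharBytes,   -- 10
       DATALENGTH(@n) AS NvarcharBytes;  -- 20
```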

I could go on and on, but needless to say this was an extremely insightful and useful presentation. Another user group member, Ron Dameron (Twitter), noted on Twitter this evening:

…seen this deck twice now. Learned new stuff both times. Thx Plamen

If you ever get a chance to attend one of Plamen’s presentations at a live event, I highly encourage you to do so, as he’s a brilliant guy and presents well. If he’s not coming anywhere near you, you can still check out his presentation stylings by watching his videos over at JumpstartTV. In closing, here are some book recommendations he threw out at the end of his slide deck. Basically this is just an excuse for me to use the cool Carousel feature from Amazon!



Policy Based Management with SQL 2008

A few weeks back I presented at the Tampa SSUG meeting on Policy Based Management with SQL 2008. Unfortunately, due to sickness and apparently my own laziness, I never got around to doing a follow-up post or posting my slide deck. First, the slide deck:

Powerpoint 2007 version

Powerpoint 97-2003 version

As a warning, the deck itself is a bit sparse on how-to info, as the meat of my presentation is in demo format. Given that, I will start a series of blog posts covering Policy Based Management and how/what you can do with it. Keep an eye out in the coming weeks for those! In the meantime, a big thank you to everyone who attended the Tampa presentation; hopefully I’ll be giving this same presentation again at SQL Saturday South Florida in August.



How to Create a Server-Side Trace with SQL Profiler

This morning on Twitter my friend (and, as per request, “handsome moron”, but trust me, he’s no moron) Jeremiah Peschka (Twitter) asked about scheduled traces. I suggested he do a server-side trace. Unfortunately, when you Google how to do this, the results only show you how to do it via T-SQL; to my dismay I didn’t see an article on how to create and schedule one from SQL Server Profiler. So this will be a quick rundown on how to create your trace using the SQL Server Profiler GUI and schedule it to run at a time of your choosing (so you don’t have to wake up at 4 am to kick off a trace).

1. Cut a hole in the box…oh wait, wrong instruction set. Open up SQL Server Profiler and create a new trace. To open Profiler, you can launch it from within SSMS (SQL Server Management Studio) by going to the Tools menu and selecting SQL Server Profiler. The other way is from the Start menu: go to Programs, Microsoft SQL Server 2008 (or 2005), Performance Tools, and you should see the shortcut for Profiler there.


2. Check your save-to options. Since this will be running on the server itself, you’ll have to make a decision here. If you save to a table, try to save to a database that is NOT on the monitored server, preferably a separate database/table you create for this purpose. You can either keep that database for reporting purposes or drop it when you’re done with your analysis. If you don’t want to create a whole new database, you can create a new table in an existing database. The advantage of saving to a table is that you can use native SQL tools, or even Excel, to do your analysis. Ideally you’ll want to save trace data to a file since it’s faster, but for this demo the save-to-database method is the one we will use.

a. When you select this option you will be prompted to connect to a SQL instance of your choosing. Select the server you’ll be doing your trace on.

b. Here you can name your new table (creates table for you, assuming you have permission to do so).

c. Your other option is to save to file. This will create a series of .TRC files wherever you specify. The advantage of this option is that your results become portable: you can move and import those files wherever you need them for analysis (e.g. copy them to your local PC and analyze them locally).
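If you do go the file route, the .TRC files are easy to pull into a table later with sys.fn_trace_gettable (the path and table name here are just placeholders):

```sql
-- Import the trace file(s) into a table; DEFAULT reads any rollover files too.
SELECT *
INTO   dbo.TraceResults
FROM   sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);

-- Example analysis: the ten longest-running statements captured.
SELECT TOP 10 TextData, Duration, StartTime
FROM   dbo.TraceResults
ORDER  BY Duration DESC;
```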

3. Configure your trace’s stop time. This is important as you want to make sure you have a consistent slice of time measured. This can later be modified within the script if you’d like.

4. At this point we can customize our trace. Click on the Events Selection tab. Here you can select the events and filters you’d like to trace on. Be careful not to choose too many options/filters, as this can bog down the trace and cause performance problems on the server itself, since you’re trying to log so many things at once. For purposes of this demo I’ve chosen the Standard (default) template and default settings.

5. Run your trace…and then stop it immediately. This sounds a little strange, but keep in mind we’re not running our trace right now; we just want the script that makes up this trace. Start the trace by clicking the Run button on the previous screen. Once the trace starts, stop it by clicking the Stop button or by going to the File menu and selecting Stop Trace.

6. Collect your trace script. Once you’ve stopped your trace go to the File menu, Export, Script Trace Definition, select For SQL Server 2005-2008. Select a location to save your script (.SQL) file.

7. To see your script, open it up in SSMS. In the script you’ll see all the options you chose in the Profiler GUI. The @DateTime variable is the option for when to stop your trace, so modify that as needed. Also, check the number after @TraceID output (in the screenshot it’s 0); that’s the @options parameter. If you leave it at 0, the trace file will fill up and not roll over. To avoid this, replace it with 2 (TRACE_FILE_ROLLOVER). You can read about all the options in Books Online.

UPDATE: One unfortunate part of this is that the scripted trace only supports writing to a flat file, so you can’t specify a SQL table like in the GUI version. In the script, make sure you also specify the location for your trace file.
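For reference, the exported script boils down to a call to sp_trace_create followed by the event and filter setup Profiler generated. Here’s a heavily abbreviated sketch (the path and stop time are hypothetical) of the parts you’d edit:

```sql
DECLARE @TraceID INT, @maxfilesize BIGINT = 50;
DECLARE @DateTime DATETIME = '2009-08-05 06:00:00';  -- when the trace stops itself

-- @options = 2 (TRACE_FILE_ROLLOVER) starts a new file when @maxfilesize is
-- hit; the exported script defaults this to 0.
EXEC sp_trace_create
     @traceid     = @TraceID OUTPUT,
     @options     = 2,
     @tracefile   = N'C:\Traces\MyTrace',  -- .trc extension is added automatically
     @maxfilesize = @maxfilesize,
     @stoptime    = @DateTime;

-- ...the sp_trace_setevent / sp_trace_setfilter calls Profiler generated go here...

EXEC sp_trace_setstatus @TraceID, 1;  -- 1 = start the trace
```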

8. Now to schedule your newly created script. In SSMS connect to the server you want to trace on. Go to your SQL Server Agent and expand it so you can see the Jobs folder. Right-click the Jobs folder and select New Job.

9. Give your job a name, then go to the Steps page and click New to add a job step. In the New Job Step dialog, click the Open button. Browse to where you saved your trace script file, select it, and click Open (it might prompt you twice; just repeat). Once loaded, you should see the contents of your script file in the Command window. NOTE: At this point you can also choose what account to run this script as by selecting an account from the Run as dropdown menu. Click OK when you’re done.

10. Next we need to schedule your trace. Click on the Schedules page in the side menu. You can pick an existing schedule or create a new one; here you’ll want to click New. Give your schedule an easily identifiable name. You can make this a one-time job or have it run on a recurring schedule (e.g. once a month). A recurring schedule is advantageous if you’re looking to monitor your SQL performance on a regular basis and want to trace the same actions every time. One thing to note: set your start time here under the ‘Occurs once at’ section. Remember, you’ve already set the stop time for the trace within your script. Once you’re done customizing your schedule, click OK.

11. Click OK to finish creating your new job. To see your newly created job look under the Jobs folder in SQL Server Agent. Congratulations, you’ve now created an automated server-side trace using SQL Server Profiler!
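As an aside, steps 8-11 can also be done entirely in T-SQL with the msdb job procedures. A rough sketch, where the job name, dates, and the pasted command are all placeholders:

```sql
USE msdb;

-- Create the job and a step that runs the exported trace script.
EXEC sp_add_job @job_name = N'Nightly Server-Side Trace';

EXEC sp_add_jobstep
     @job_name  = N'Nightly Server-Side Trace',
     @step_name = N'Start trace',
     @subsystem = N'TSQL',
     @command   = N'-- paste the contents of your exported trace script here';

-- One-time schedule: freq_type 1 = run once, at 4:00 AM on the given date.
EXEC sp_add_jobschedule
     @job_name          = N'Nightly Server-Side Trace',
     @name              = N'One-time 4 AM kickoff',
     @freq_type         = 1,
     @active_start_date = 20090805,  -- YYYYMMDD
     @active_start_time = 40000;     -- HHMMSS (4:00:00 AM)

-- Target the local server so SQL Server Agent will actually run the job.
EXEC sp_add_jobserver @job_name = N'Nightly Server-Side Trace';
```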