SQLChicken.com

SQL Server DBA Tips & Tricks


So Long and Thanks For All The Fish

Not to try and steal Satya Nadella's thunder today at WPC, but I've got some exciting news of my own to share: I'll be fulfilling a lifelong dream and going to work for Microsoft!

Every Beginning Has an End

This news is kind of bittersweet, though, as I'll be leaving Pragmatic Works after almost four amazing years. It has truly been an awesome experience. In the last four years I've had the chance to work with some of the best folks in the business and had opportunities I would never have dreamed of. Here's a highlight list of some of the best parts:

  • The amazing people I’ve had the honor to work (and karaoke) with
  • Learning the ins and outs of SQL Server from some of the best in the world
  • Opportunities to write on major SQL Server book titles
  • Learning and teaching exciting new technology like Parallel Data Warehouse/APS
  • Developing and delivering exciting training content
  • Helping others through mentoring and teaching in the Pragmatic Works Foundation
  • Ability to shape and have input to software offerings
  • Company culture built around giving back to SQL Server Community
  • Literally scaling mountains

I’d like to give a special thanks to Brian Knight (Blog | Twitter), Adam Jorgensen (Blog | Twitter) and Bradley Ball (Blog | Twitter). My time at Pragmatic Works was great and I couldn’t have asked for better people to work for. Thank you guys for everything.

 

So What’s Next?

The good news is that in my new position I'm afforded the opportunity (and encouraged!) to continue being involved with the community. You'll still see me presenting at SQLSaturday events, picking up the slack on my blogging and webinars, and hopefully presenting at future PASS Summits (buahaha, now I get to submit under the Microsoft call for speakers!) and other conferences.

I’ll also be delving into some other areas, mainly in the development space. You’ll be seeing some new content from me in areas such as BI development, Big Data, Cloud development, and Windows/Windows Phone development. Should be fun!

And like that…he was gone


T-SQL Tuesday #48 Roundup


A big thanks to everyone who participated in this month's T-SQL Tuesday (link) blog party. This month's topic was to share your thoughts on the Cloud. Lots of interesting reads after the break.

Read More


T-SQL Tuesday: Head in the Clouds

This month's T-SQL Tuesday is hosted by yours truly. Our topic this month is simply the Cloud. If you work in IT there's approximately zero chance that you've managed to avoid this word in some respect. Has your manager asked you to look into what cloud solutions can do for you? Are you ahead of the curve and have taken it upon yourself to already start using and/or testing cloud solutions? This month I asked everyone to share their thoughts on the cloud.

Choices, Choices

When people talk about cloud solutions, there's a myriad of options they could mean. Since this is a SQL Server-focused blog, I'm going to focus on offerings specific to it. More specifically, I'll be talking about Microsoft's cloud platform, Windows Azure, since that's the platform I have experience with.

When it comes to SQL Server in the cloud there are two routes you can take. The first is Windows Azure SQL Database (WASD), an offering known as Platform as a Service (PaaS). It gives developers a relational platform to develop against quickly and easily, without the hassle and administrative overhead that comes with standing up a full SQL Server instance. The drawback is that there are certain limitations around this option, but I'll drill into those in further detail below.

The second solution you'll come across, and my personal favorite, is Windows Azure Virtual Machines. This offering is referred to as Infrastructure as a Service (IaaS). What it gives you is an on-demand, scalable compute infrastructure. In non-marketing speak, it means you can spin up a virtual machine with SQL Server already installed, storage allocated, and a customized number of CPUs and amount of memory in minutes, instead of waiting around for your IT department to go through its normal provisioning cycle. If it sounds like I'm advocating completely circumventing your company's policies and going rogue, I'm not. More detailed thoughts on this offering below as well.

WASD: Hey DBAs, It’s Not For Us!

Ever since Azure rolled out its various SQL Server offerings I've been trying to wrap my head around this particular facet of the platform. Back when it came out (and was still called SQL Azure), all I could do was focus on its limitations and what it couldn't do.

Some of those limitations have changed/increased over time, such as database sizes. At first the largest database you could create was 50GB. Now you can create databases up to 150GB in size, and you can shard your data out to get beyond that 150GB barrier if you need to. However, sharding data like that requires different coding techniques that your development team likely isn't using today.

Additionally there are other restrictions, like requiring a clustered index on every table, which isn't necessarily a bad thing. Since this database is in the cloud, another issue developers need to code for is network connectivity. Connectivity can (and will) drop on occasion, so it's necessary to code retry logic into your application. Finally, if you write a bad query that causes the transaction log to "blow out", your connection gets throttled for a time. For me, as a DBA, all these restrictions begged the question: why would anyone in their right mind want to use this?! And therein lies the crux of the issue: I'm a DBA…this isn't a solution meant for me.

It wasn't until having some conversations with folks at this year's PASS Summit that the whole use case, and my understanding, of WASD really clicked into place. After attending Conor Cunningham's (Blog) pre-con on the Azure Data Platform, attending Grant Fritchey's (@gfritchey | Blog) session, and having conversations with Lara Rubbelke (@sqlgal | blog) and Eli Weinstock-Herman (@Tarwn | blog) amongst others, I came to a realization about PaaS: it's not meant for me, so I really shouldn't be bothered by the fact that it can't do X, Y, Z. Just because it has the SQL Server label on it doesn't automatically mean I, the DBA, need to own it! "But Jorge, in my shop if it says SQL on it I end up supporting it anyways!" Well, that's okay, because with PaaS the administrative side of things is handled (for the most part) by Microsoft. These tasks include backups, hosting/provisioning of servers, and day-to-day administration of the infrastructure that supports the solution.

Long story short, this is a solution aimed at developers. They just want a relational data store to develop against without the headache of waiting for someone to provision them an environment, nothing more. Think this isn't happening today with devs? Check out this Twitter conversation I had with Scott Hanselman (@shanselman | blog) recently:

Scott not only uses WASD for development purposes, he wasn't even sure what I was talking about when I asked him if he used WASD; that's how transparent they've made it for developers. The conversation came out of my discovery that not all administrative pieces of WASD had been ported over to HTML5 from Silverlight yet. He didn't know because, as a developer, that's something he never had to deal with or care about. In the words of Martha Stewart, "that's a good thing".

OH NOES, CLOUD HAZ TAKEN MAH JOBZ!

Don't worry, dear reader (totally ripped that from @sqlballs), not all is lost and no, your job as a DBA isn't going anywhere. If anything, your job stays intact and it's going to evolve. Evolution is good; ask Charles Xavier. The rise of cloud technology not only cements your role in the company, it will actually upgrade you a bit as you evolve into more of an architect role. Still like staying close to the technology? It's still there and not going anywhere. We still have our on premise options. Not only that, we have pretty cool cloud options that are made to work hand-in-hand with our on premise environments. Which brings me to my favorite option…

Windows Azure Virtual Machines FTW

I love virtual machines. I especially love Windows Azure Virtual Machines (WAVM). Not only do they keep my job intact, in that I'm still doing everything a DBA does today in administering and using full SQL Server in an operating system, they also make my job a hell of a lot easier in some respects.

One of the coolest things about WAVM is that Microsoft provides a nice selection of pre-built template virtual machines for you to choose from. SQL Server 2008 R2 + SP1 on Windows Server 2008 R2 + SP1? It's there. SQL Server 2014 CTP 2 on Windows Server 2012 R2? Only a few clicks away. Not only that, you can fully customize these virtual machines' resources, such as the number of CPUs, how much memory is allocated, and disk space. That last one is probably the best news anyone who has had to beg a SAN admin for disk space has heard. You also get the benefit of applying high availability and backup protection options in a few clicks.

So if it's just a virtual machine, just like you have today in your datacenter, what's the big deal? Well, there are a few things. I just mentioned that self-service ability. Unless your enterprise has invested in a full-blown Private Cloud solution, you probably don't have anything like that available to you. Today you're putting in a request, or opening a ticket, outlining what you want and writing up justifications for it. Then you get to wait for the network team, SAN team, sysadmins and DBAs to all do their part in setting up the machine before finally turning it over to you.

Fantastic…What’s The Catch?

I know, I'm painting a rosy, unicorn-laden picture. The fact is there are certainly some things about WAVM you need to consider. First, it's not connected to your network. Not a problem…maybe. There are ways to extend your network out to the cloud through Windows Azure Virtual Network. If you extend your network out to Azure, you can also stand up a domain controller out there so any virtual machines you spin up look and feel just like any other server on your corporate network.

Okay, then what about reliability? Each component of Azure offers its own SLA, which you can see here. As of the time of this article the stated SLA for the virtual network is 99.9%, and other cloud services (virtual machines with availability sets) at least 99.95%. Do you get that sort of SLA at work today? You might. Now compare what you'd pay for that level of reliability and service in Azure versus what your company paid to set up the infrastructure and staff needed to offer its current level of reliability.

What’s security like? Well I’ll be blogging and presenting more on Azure security this coming year but for purposes of this post I’ll condense it. It’s as good as you make it. Just like your current environment. Again, because we’re talking virtual machines it’s all the same as what you’re doing today inside your data center. In fact, I would bet that most of you currently work in places where your company’s datacenter is actually located outside your company and hosted by someone else (e.g. colo sites). In these massive datacenters you have racks and racks of servers and equipment that are bought and paid for by customers of the host but are physically located side by side. Azure is also a co-located situation but you have a little more dynamic control over where components of your solution are located.

Okay, so we have our virtual machines "hanging out" in public for anyone to get to then? Not exactly. The virtual networks you configure, by default, essentially have their tin foil hats on and are not open to the world. For the portions you do open up, you have to explicitly grant access through the firewalls in place. How about that data in storage? Again, how much do you secure it today? If you leave it unencrypted, at rest, in your data center today then you're potentially exposing it to theft as well, so technically this risk exists in both worlds. In the end, with security, there comes a point where it's simply a matter of trust. Trust Microsoft to secure their data centers. Trust yourself to do your job correctly and secure what needs to be secured. This last point brings me to my final epiphany about the cloud, thanks to Grant Fritchey/Tom LaRock (@SQLRockstar | blog) for this one…

The Cloud Forces You to Do It Right

This goes for both PaaS (especially) and IaaS. One of the best things I heard at Summit this year was Grant ranting on how WASD forces you to code correctly. Write code that forces a log to start blowing out and it kills your session? Well, write it correctly and avoid that. Network glitches can and will occur. Have you written retry connection logic into your application? I guarantee you will now.

Like it or not we’re seeing a fundamental shift in how computing solutions are offered and used. We’re seeing a world of consumerization of IT (I hate myself for typing that marketing buzz phrase) where end users expect the freedom to pick and choose their solutions and don’t want to wait for the black hole that IT can be to respond to their needs. They will discover solutions like Azure, see how fast they can do stuff on their own, and potentially get themselves in a bind. Instead of coming off as the stodgy group that doesn’t want to help, embrace these solutions yourself and offer them up with guidance. In the end it’ll be a win-win for everyone.

How do you feel about this whole thing? If you didn’t write your own post this month I’d love to hear your thoughts in comments below.

 

 


Necromancing the Chicken

First off, I’m sorry this space has been dead for so long. I’m sure I’ve dropped off many an RSS reader (as well as SQLRockstar’s Rankings) due to inactivity. Well, it’s high time I get back on the blogging wagon! So what’s happened since the last post (YIKES!) in September? Read on after the break to find out…

Zombie chicken

Rumors of my demise have been exaggerated…

Read More


24 Hours of PASS Fall 2012

Do you like training? Do you like FREE training? Then make sure you check out the 24 Hours of PASS event that starts today at 12:00 GMT. If you miss the events of the day, no worries, the recordings will be up within a few weeks.

Check out the great schedule of events:

Thursday, September 20, 2012

Session 01 (BIA) – Starts at 12:00 GMT
Choosing the Right Reporting Platform
Presenter: Brian Knight, Devin Knight

Session 02 (DBA) – Starts at 13:00 GMT
Best Practices for Upgrading to SQL Server 2012
Presenter: Robert Davis

Session 03 (AppDev) – Starts at 14:00 GMT
Three Ways to Identify Slow Running Queries
Presenter: Grant Fritchey

Session 04 (AppDev) – Starts at 15:00 GMT
Fasten Your Seatbelt – Troubleshooting the Most Difficult SQL Server Problems
Presenter: Klaus Aschenbrenner

Session 05 (CLD) – Starts at 16:00 GMT
SQL Server Private Cloud != Azure
Presenter: Allan Hirt, Ben DeBow

Session 06 (AppDev) – Starts at 17:00 GMT
What are the Largest SQL Server Projects in the World?
Presenter: Kevin Cox

Session 07 (AppDev) – Starts at 18:00 GMT
Practical Demos of Text Mining and Data Mining using SQL Server 2012
Presenter: Mark Tabladillo

Session 08 (DBA) – Starts at 19:00 GMT
PowerShell 101 for the SQL Server DBA
Presenter: Allen White

Session 09 (BID) – Starts at 20:00 GMT
Mobile Business Intelligence
Presenter: Jen Underwood

Session 10 (BID) – Starts at 21:00 GMT
Slow MDX Queries: The Case of the Empty Tuples
Presenter: Stacia Misner

Session 11 (DBA) – Starts at 22:00 GMT
Using SQL Server 2012 Always On
Presenter: Denny Cherry

Session 12 (PD) – Starts at 23:00 GMT
Leadership – Winning Influence in IT Teams
Presenter: Kevin Kline

Friday, September 21, 2012

Session 13 (BIA) – Starts at 00:00 GMT
BI Architecture With SQL 2012 & SharePoint 2010
Presenter: Rod Colledge

Session 14 (DBA) – Starts at 01:00 GMT
DBCC, Statistics, and You
Presenter: Erin Stellato

Session 15 (BIA) – Starts at 02:00 GMT
SSIS Design Patterns for Fun and Profit
Presenter: Jessica Moss, Michelle Ufford

Session 16 (AppDev) – Starts at 03:00 GMT
Characteristics of a Great Relational Database
Presenter: Louis Davidson

Session 17 (BIA) – Starts at 04:00 GMT
What’s All the Buzz about Hadoop and Hive?
Presenter: Cindy Gross

Session 18 (AppDev) – Starts at 05:00 GMT
Taking SQL Server Into the Beyond Relational Realm
Presenter: Michael Rys

Session 19 (BIA) – Starts at 06:00 GMT
Agile Data Warehousing with SQL Server 2012
Presenter: Davide Mauri

Session 20 (AppDev) – Starts at 07:00 GMT
Digging Into the Plan Cache
Presenter: Jason Strate

Session 21 (BIA) – Starts at 08:00 GMT
Introduction to Microsoft’s Big Data Platform and Hadoop Primer
Presenter: Denny Lee

Session 22 (BID) – Starts at 09:00 GMT
Big Data Analytics with PowerPivot and Power View
Presenter: Peter Myers

Session 23 (CLD) – Starts at 10:00 GMT
Best Practices and Lessons Learned Using SSIS for Large Scale Azure Data Movement
Presenter: Steven Howard

Session 24 (PD) – Starts at 11:00 GMT
Mentoring for Professional Development
Presenter: Andy Warren

* Please be sure to check our time zone guide for exact times in your area. This 24 Hours of PASS event begins at 12:00 GMT on September 20 and runs for 24 consecutive hours.


How To Prevent SELECT * The Evil Way

SELECT * FROM…when administrators see that in developer code, we generally tend to cringe. Why? In a nutshell, it's terrible on a few fronts. First, the SELECT * that gets written often lacks a WHERE clause. What's the problem? Well, that pulls back every single row from the table.

Sure, that may not be too bad on a table with a few hundred rows but what about on a table with millions? That could cause a lot of performance problems since you’re trying to read all the data off disk (again, potentially).  Secondly, do you really need all those rows and/or columns? It’s a waste of time and resources to pull back every column from a table if your application is only going to be using a few of them anyways.

So how do we prevent this? Well, I recently learned an extremely evil way of preventing such a query. I'd like to start off by saying this was NOT my idea. I learned this trick from Adam Jorgensen (Blog | Twitter). I'd also like to add this disclaimer:

DO NOT JUST GO DO THIS IN YOUR PRODUCTION ENVIRONMENT! I am not responsible for whatever evil you turn loose upon your environment. Always test things out in a development environment first and get proper approvals before making any changes.

Pure Evil Method

This method is actually evil in its simplicity. What we'll be doing is adding a new column to the existing table. The "trick" is that this will be a computed column whose formula causes an error, specifically a divide by zero error. As shown in the screenshot below, create the new column on the table and call it something obvious like 'DoNotSelectAll'. In the Column Properties window, under the Table Designer section, there is a property called Computed Column Specification. In the formula section, enter (1/0). Save your table.

(screenshot: adding the DoNotSelectAll computed column in the Table Designer)
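
If you'd rather skip the designer, here's what the same change looks like in T-SQL. This is just a sketch; the table name dbo.Orders is hypothetical, so point it at your own (test!) table:

-- Add a computed column whose formula errors whenever it is actually selected
ALTER TABLE dbo.Orders
    ADD DoNotSelectAll AS (1/0);
GO

-- SELECT * FROM dbo.Orders; now fails with a divide by zero error,
-- while queries that name only the real columns keep working.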

Now if I try to do my SELECT * on this table, I’ll get this lovely message:

(screenshot: the divide by zero error returned by SELECT *)

Alright, we learned our lesson, now we’ll explicitly name our columns that we need:

(screenshot: the query succeeding with an explicit column list)

Now, this last query worked, but notice how I didn't put a WHERE clause on it, so it pulls back all rows anyways? Yup, your users can still pull back everything, but at least they're not doing a SELECT *. Also keep in mind that if you're used to right-clicking a table in SSMS and selecting the top 1000 rows, with this column in place that will error as well. What are your alternative options?

LESS EVIL METHODS

Abstraction

Another way to control this kind of behavior is by not letting users hit base tables at all. You could create views containing queries that limit the rows returned. A user can then do a SELECT * against the view, but the underlying code of the view itself limits what comes back.
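
A minimal sketch of that approach (all object names here are hypothetical):

-- View that only exposes a limited slice of the base table
CREATE VIEW dbo.vw_RecentOrders
AS
SELECT OrderID, CustomerID, OrderDate, TotalDue
FROM dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -90, GETDATE());
GO

-- Point users at the view instead of the base table
GRANT SELECT ON dbo.vw_RecentOrders TO ReportingUsers;
DENY  SELECT ON dbo.Orders          TO ReportingUsers;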

Depending on your situation, this may or may not work. If the user needs specific data that isn't in that limited pool of results, it could adversely affect whatever process they're using the data for.

Another option is wrapping everything in stored procedures and granting users access to execute the stored procedures rather than querying tables and views directly. This can be good since you're encapsulating the code, and users can pass parameters to the stored procedures so you can make the queries somewhat dynamic.
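
Again, a quick sketch with made-up names rather than a finished solution:

-- Stored procedure that takes a parameter and returns only what's needed
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
GO

-- Users get EXECUTE rights only, never direct table access
GRANT EXECUTE ON dbo.usp_GetOrdersByCustomer TO ReportingUsers;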

Handbrake

SQL Server 2008 introduced a feature called Resource Governor. This feature allows you to throttle resources on queries based on custom classifier functions and groupings you specify. Yes, it's an Enterprise Edition feature, but it can be well worth it if you're having resource-related issues due to runaway queries.

Now, this feature will NOT prevent users from running SELECT *-type queries, but you can throttle how much resource is allocated to a query so you can at least control how badly it affects you.
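
To give you an idea of what that setup looks like, here's a rough sketch. The pool, group, limits, and login naming convention are all made up, so treat it as a starting point rather than a recommendation:

USE master;
GO
-- Pool and workload group that cap what ad hoc queries can consume
CREATE RESOURCE POOL AdHocPool WITH (MAX_CPU_PERCENT = 25);
CREATE WORKLOAD GROUP AdHocGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 15)
    USING AdHocPool;
GO
-- Classifier function routes connections into the throttled group
CREATE FUNCTION dbo.fnResourceClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN (CASE WHEN SUSER_SNAME() LIKE N'REPORT%'  -- hypothetical login naming convention
                 THEN N'AdHocGroup'
                 ELSE N'default'
            END);
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnResourceClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;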

Security

My friend Brian Kelley (Blog | Twitter) will probably appreciate this one. Be stringent with the access you grant! Grant users only the access they need. Also, if you don't want users banging against your transactional systems directly, think about setting up a dedicated/isolated reporting environment and pointing the users there instead.

The reporting box you stand up doesn't necessarily have to be as beefy as your transactional system, and you can set up customized security on that database. This is especially helpful when the transactional system is a vendor application whose code you can't modify.
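
A small sketch of what "only the access they need" can look like on that reporting database (role, schema, and user names are hypothetical):

-- Role that can read the reporting schema and nothing else
CREATE ROLE ReportReaders;
GRANT SELECT ON SCHEMA::Reporting TO ReportReaders;

-- Add users to the role instead of granting rights directly
EXEC sp_addrolemember 'ReportReaders', 'SomeReportingUser';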

Do you have any other suggestions/tricks to help prevent crazy user queries? Let’s hear it in the comments!


Identify and Alert for Long-Running Agent Jobs

Being a DBA is like being a train conductor. One of the biggest responsibilities is making sure all jobs are running as expected, or making sure "all the trains are running on time", so to speak. As my partner-in-crime Devin Knight (Blog | Twitter) posted earlier, we have come up with a solution to identify and alert when SQL Agent jobs are running longer than expected.

The need for this solution came from the fact that, despite my having alerts for failed agent jobs, we had a process pull a Palin and go rogue on us. The job was supposed to process a cube, but since it never failed, we (the admins) weren't notified. The only way we found out was when a user finally alerted us and said "the cube hasn't been updated in a couple of days, what's up?". Sad trombone.

As Devin mentioned in his post, the code/solution below is very much a version 1 product, so if you have any modifications/suggestions then have at it. We've documented it in-line so you can figure out what the code is doing. Some caveats here:

  • This solution has been tested/validated on SQL Server 2005 (SP4) and 2008 R2 (SP1).
  • Code requires a table to be created in a database. I've set up a DBAdmin database on all servers here for custom DBA scripts such as this, Brent Ozar's Blitz script, Ola Hallengren's maintenance solution, Adam Machanic's sp_whoisactive, etc. You can use any database you'd like to keep your scripts in, but be aware of the USE statement at the top of this particular code
  • This solution requires that you have Database Mail setup/configured
  • To set up this solution, create an Agent job that runs every few minutes (we're using 5) to call this stored procedure
  • FYI, I set the mail profile name to be the same as the server name. One – makes it easy for me to standardize naming conventions across servers. Two – Lets me be lazy and code stuff like I did in the line setting the mail profile name. If your mail profile is set differently, make sure you correct it there.
  • Thresholds – This is documented in the code but I'm calling it out anyways. We've set it up so that for any job whose average runtime is less than 5 minutes, the threshold is the average runtime + 10 minutes (e.g. a job that runs an average of 2 minutes would have an alert threshold of 12 minutes). Anything beyond a 5 minute average runtime is controlled by a variable, with a default of 150% of the average runtime. For example, a job that averages a 10 minute runtime would have an alert threshold of 15 minutes. (There's a simplified sketch of this logic near the end of this post.)
  • If a job triggers an alert, that information is inserted into a table. Subsequent runs of the stored procedure then check the table to see if the alert has already been reported. We did this to avoid having admins emailed every subsequent run of the stored procedure.

CODE (WARNING: This code is currently beta and subject to change as we improve it)

Last script update: 7/24/2012

Change log:

7/12/2012 – Updated code to deal with “phantom” jobs that weren’t really running. Improved logic to handle this. Beware, uses undocumented stored procedure xp_sqlagent_enum_jobs

7/24/2012 - Updated code to v 1.16 to deal with email alerts error. Removing code from blog post and asking folks to instead download directly from download link below. Formatted code on blog makes it messy and a pain when updating.

Download script link – Click here
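
If you just want the gist of the threshold logic before grabbing the script, here's a stripped-down sketch. It only performs the check described above against msdb; the downloadable script also handles the alert table, Database Mail, and the "phantom" job issue:

;WITH AvgRun AS (
    -- Average historical runtime per job, in minutes (run_duration is stored as HHMMSS)
    SELECT  j.job_id,
            j.name,
            AVG(h.run_duration / 10000 * 3600
              + h.run_duration / 100 % 100 * 60
              + h.run_duration % 100) / 60.0 AS avg_minutes
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
    WHERE h.step_id = 0                     -- job outcome rows only
      AND h.run_status = 1                  -- successful runs
    GROUP BY j.job_id, j.name
)
SELECT  a.name,
        DATEDIFF(MINUTE, ja.start_execution_date, GETDATE()) AS running_minutes,
        CASE WHEN a.avg_minutes < 5
             THEN a.avg_minutes + 10        -- short jobs: average + 10 minutes
             ELSE a.avg_minutes * 1.5       -- longer jobs: 150% of average
        END AS threshold_minutes
FROM msdb.dbo.sysjobactivity AS ja
JOIN AvgRun AS a ON a.job_id = ja.job_id
WHERE ja.start_execution_date IS NOT NULL
  AND ja.stop_execution_date IS NULL        -- job is still running
  AND ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
  AND DATEDIFF(MINUTE, ja.start_execution_date, GETDATE()) >
      CASE WHEN a.avg_minutes < 5 THEN a.avg_minutes + 10 ELSE a.avg_minutes * 1.5 END;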

Got any feedback/comments/criticisms? Let me hear them in the comments!


Monday Morning Mistakes: Connecting to Wrong Environments

Today's Monday Morning Mistake is another painfully common one that we all run into at some point. You've got some new code given to you by a developer that's supposed to be run against QA (because you're a good DBA and you don't grant developers access to anything outside of Development). Part of the code drops some existing tables and recreates them with new schemas, objects, etc. You hit the execute button, and the blood drains from your face when you realize the instance you were connected to was a production instance!

Issue

You have multiple instances of SQL Server to manage and you want a visual indicator in SQL Server Management Studio (SSMS) to let you quickly know which environment you’re connected to.

Read More


BIxPress 3.5–Now With More Awesome!

Pragmatic Works has now released version 3.5 of the award-winning BIxPress software! For those not familiar with BIxPress, it's a tool that helps you develop SSIS/SSAS solutions faster, easily and quickly deploy SSIS packages, monitor the performance of SSIS packages, and much, much more!

So what's new with 3.5? In addition to SQL Server 2012 support (SSIS), you may have noticed in previous releases that Reporting Services didn't get much love. That all changes with this release, as we've now incorporated health monitoring of Reporting Services instances called Reporting Performance Monitor! The new dashboard includes Average Runtime for Reports, Longest Running Reports, Report Memory Usage, Average Number of Rows, Active Users, et cetera:

BIxPress Reporting Console Dashboard

Another great feature in this release is the Report Deployment Tool. This feature lets you quickly and easily deploy your Reports, Folder Structures, and Data Sources between Reporting Services instances!

BIxPress Report Deploy

One more major update in this release is the update to the SSIS Package Performance monitoring interface. Same great insight, new cleaner interface!

BIxPress Package Performance Report

What are you waiting for? If you already have BIxPress, you can update through the regular process. Don’t have it yet and want to try it out? Download a trial copy today! Also, if you’re a Microsoft MVP don’t forget Pragmatic Works offers NFR licenses to MVPs, so go get your copies today!


SQL Server 2012: Biggest Little Core-house

You sure do got a 'purty motherboard...

Now that SQL Server 2012 is generally available to the public, many companies are looking at the new platform and trying to figure out how to move to it and take advantage of all the cool new features. Unfortunately, some folks haven't noticed or been aware of some of the fine print that came along with this release. I think at this point it's safe to say most everyone knows about the change to a core-based licensing model. The part that is now causing major heartache is an issue that Aaron Bertrand (Blog | Twitter) recently brought up in a post that I HIGHLY recommend you go read: 'A cautionary tale about grandfathering CAL licenses in SQL Server 2012 Enterprise'.

To quickly summarize the issue, there's a 20-core limit in place with Enterprise edition (UPDATE – Thanks for this clarification point, Aaron: to be clear, the 20-core restriction *only* applies if you upgrade Server + CAL licensing via SA. With core-based licensing the limit is whatever you licensed; in other words, if you buy a 64-core Enterprise license, you get to use all 64 cores). In a nutshell, what that means is if you have a server with 4 8-core processors for a total of 32 cores, and you install SQL Server 2012 on it licensed previously by CAL with SA, SQL Server will only "see"/use 20 of those cores! This is a huge deal and one I'm really surprised has not been addressed more vocally by the user community. I've already seen a couple of statements as strong as "based on this, we will seriously start looking at another platform". My hope is that if enough noise is made from the customer base, Microsoft will at least raise that limit, similar to how VMware changed their licensing for vSphere 5 based on customer backlash. Have you or your company run into this issue yet? Let me hear your thoughts in the comments.

Yet Another Update: Per Steve Jones’ request in comments, here’s a visual indicator of scenarios and how it could affect you

Edition    | Licensing today                                   | Licensing in 2012                         | Can I use all my cores?
Enterprise | Per Processor (4 procs, 4 cores)                  | License all 16 cores (buy 8 core packs*)  | Yes
Enterprise | Per Processor (4 procs, 8 cores)                  | License all 32 cores (buy 16 core packs)  | Yes
Enterprise | Grandfathered Server (4×4 cores) + CALs (with SA) | No core packs purchased due to agreement  | Yes
Enterprise | Grandfathered Server (4×8 cores) + CALs (with SA) | No core packs purchased due to agreement  | No (limited to 20)
Enterprise | Server (4×8 cores) + CALs (with SA)               | License 32 cores (buy 16 core packs)      | Yes

*Core licenses are sold in packs of two, so 16 cores requires 8 packs to be purchased, etc.

Additionally, here's a link to the licensing FAQ from Microsoft. The one you want to pay attention to (for this scenario) is the last one:

How will the 20 Core Server limit work for SQL Server 2012?

Existing Enterprise Edition licenses in the Server + CAL licensing model that are upgraded to SQL Server 2012 and beyond will be limited to server deployments with 20 cores or less. This 20 core limit only applies to SQL Server 2012 Enterprise Edition Server licenses in the Server + CAL model and will still require the appropriate number/versions of SQL Server CALs for access.

Addendum: Please note, this blog (or any other) should not be your definitive source for licensing information. For that, always ALWAYS contact your local Microsoft rep, as they have the details of your specific agreements and options. This post is meant for informational purposes only.


Pragmatic Works Software for MVPs

This is just a quick post to remind folks who are current Microsoft MVPs that Pragmatic Works offers NFR licenses of its software! This NFR offer includes:

  • BIxPress – Audit, Notify, Deploy and Manage SSIS Packages
  • BIDocumenter – One Stop Documentation Solution for SQL Server, SSAS, SSRS and SSIS
  • Task Factory – Collection of high performance SSIS components
  • DTSxChange (10-pack) – Profile, Convert and Monitor. One stop DTS to SSIS Migration Solution

To get your licenses simply email our Sales folks and they’ll be happy to get you started! If you’re not an MVP and would like to try our software, you can download trial versions of all our software as well.


STOP! Consolidate and Listen

I just wrapped up my 24 Hours of PASS session on consolidation. A big THANK YOU again to everyone in attendance, who kindly put up with my horrendous rapping “skills”! As promised, below is the link to the slide deck. In the presenter’s notes you’ll find some good links and resources for consolidation.

Download link: STOP_Consolidate_and_Listen_24HOP (ZIP file)

If you have any additional questions around consolidation, virtualization or my general sanity feel free to leave them in the comments section below, thanks!


Monday Morning Mistakes: Not Setting Memory Limits

Welcome back to another edition of the Monday Morning Mistakes series. Today's issue is one I tend to run into quite often with clients and is an important topic to know about as a database administrator. Without further ado, let's get to our issue.

Issue

You have the SQL Server database engine installed on a system alongside other services such as Analysis Services, Reporting Services and/or Integration Services, and you constantly seem to run out of memory. Restarting a service seems to fix the issue temporarily, but some time later the same problem returns.
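
One common mitigation is capping how much memory the database engine can take so the other services have room to breathe. A minimal sketch (the 16 GB value is only an example; size it for your own box):

-- Cap the database engine so SSAS/SSRS/SSIS aren't starved
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 16384;   -- example value only
RECONFIGURE;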

Read More
