Russ Thomas – SQL Judo

The Art of SQL Server Database Administration, Development, and Career Skills for the Technically Minded

TSQL Tue #79: When the Memory is All Gone

I used to blog.  Today I ashamedly realized it’s been nearly six months since my last post.  That’s pretty pathetic.  I was doing a couple of posts a month for nearly two years, and then nada.  No good excuses, just life.  But! There is nothing like a good TSQL2SDay topic to get me back on the wagon.  Michael J Swart sets the tone this month with a well-timed topic coinciding with the release of SQL Server 2016.

There are a lot of cool features I could pick from.  The ones freshest in my mind, however, are some of the major enhancements to SQL Server In-Memory OLTP.

I actually just spent several weeks authoring a Pluralsight course on this very topic, so I’ve got a lot of material bouncing around my skull. #selfpromotion #shameless #whyIhaventbeenblogging #ifyouhavepluralsightyoushouldgocheckitout

There are a lot of enhancements to IMOLTP, so many in fact that, in my opinion, this is now a feature worth considering in far more than the handful of extreme edge cases it was often limited to in 2014.  One of those enhancements is support for max length types such as varchar(max), nvarchar(max), and varbinary(max).

That’s all well and good, but a major concern for a lot of people exploring IMOLTP is… what if I run out of memory?  The addition of these types probably increases that concern for some.  In 2014 there were a handful of equations I could use to figure out the maximum memory footprint of each row and get a decent idea of how many rows I was willing to support, but now a single max-type value could sink it all.

So the true theme of this post is: what happens when you run your IMOLTP-enhanced database out of memory?  How do you figure out the culprit if you do?  How bad is it for anything else on that database or server?  And what else is there to be aware of?

That's a lot of memory!!

The good news is that SQL Server respects your max and min memory settings even when using In-Memory OLTP tables.  The space required for these tables must exist in SQL Server’s memory space for things to work, and there must be plenty left over for anything else you need SQL Server to do.  The moral of the story is that you should think through the max memory setting ahead of time.  This is not a good area for gut-feel calculations.  If you let SQL Server take more memory than is physically available on the server, it is possible you could crash the OS.
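If you want to set that cap explicitly rather than trust the default, it’s a quick sp_configure call.  A minimal sketch, where the 28 GB figure is purely illustrative for a hypothetical 32 GB box, not a sizing recommendation:

```sql
-- illustrative only: cap SQL Server at 28 GB on a hypothetical 32 GB box,
-- leaving headroom for the OS and anything else on the machine
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 28672;
reconfigure;
```

How much headroom the OS actually needs depends on what else is running on the box; the point is that the number should come from planning, not gut feel.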

If, however, SQL Server is dutifully leaving enough memory for the OS no matter what else is going on, then chances are that when you run your IMOLTP tables past capacity, what you’ll see is this:

Msg 701, Level 17, State 167, Line < something >
There is insufficient system memory in resource pool ‘default’ to run this query.

The transaction that hit this error will fail, and you will now be somewhere near the limit of SQL Server’s available memory.  At this point, you have some decisions to make.  1. Do you increase memory?  (Not all that easy.)  2. Do you give up on what you were trying to do and instead go eat a sandwich?  3. Do you go find out where all your memory went and make sure your users aren’t suffering low-memory effects themselves?
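For option 3, one place to start is the memory clerk DMV.  A sketch, assuming the In-Memory OLTP engine’s allocations show up under the XTP clerk type:

```sql
-- how much memory the In-Memory OLTP engine is holding, per clerk
select type, name, memory_node_id, pages_kb
from sys.dm_os_memory_clerks
where type = 'MEMORYCLERK_XTP'
order by pages_kb desc;
```

If that clerk dwarfs everything else, you know where to dig next.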

Adding up where all your IMOLTP memory has gone is wicked easy.  Just query sys.dm_db_xtp_table_memory_stats.  It adds up all the memory held by your memory-optimized tables and their indexes, and with a call to object_name() you can easily resolve the table names from their IDs so you can go hound them out yourself.  Note, all sizes are in KB.

select object_name(ms.object_id) as memory_optimized_table, ms.*
from sys.dm_db_xtp_table_memory_stats ms
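If you’d rather see one total per table than the raw columns, something like this works.  A sketch built on the allocated-KB columns of that same DMV:

```sql
-- total allocated In-Memory OLTP memory per table, in KB, biggest first
select object_name(ms.object_id) as memory_optimized_table,
       sum(ms.memory_allocated_for_table_kb
         + ms.memory_allocated_for_indexes_kb) as total_allocated_kb
from sys.dm_db_xtp_table_memory_stats ms
group by ms.object_id
order by total_allocated_kb desc;
```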

You’ll want to get right on this, because if IMOLTP has eaten up most of your available memory, then any other process on SQL Server is probably suffering the consequences:  slowness, user complaints, terrible page life expectancy, bowel irritation, ulcers, and increased risk of liver damage.

Hopefully you can clean up some indexes, tables, or other objects and get things healthy enough for some more long term decision making.

Let’s say it’s worse than that.  In extreme cases – you can make your SQL Server completely inaccessible.  Users might start experiencing error messages like the following:

The client was unable to establish a connection because of an error during connection initialization process before login.  Possible causes include … <a whole bunch of causes> … or there was a resource limitation (insufficient memory or maximum allowed connections)

Chances are, if you run into this, you might have to resort to the dedicated admin connection or other more drastic measures to even get logged into your box and address the low-memory issue.
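For reference, the dedicated admin connection can be opened from sqlcmd with the -A switch (the server name below is a placeholder):

```
sqlcmd -S MyServer -A -d master
```

The DAC gets a small reserved scheduler and memory slice, which is exactly why it can still connect when normal logins can’t.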

This second possibility tends to scare people away from In-Memory OLTP, but if you really think about it, it is not much different from running tempdb, one of your user databases, or a log drive out of space.  At that point, on any of those storage mediums, DBA-type gymnastics will be required.  It’s just that memory is an area where we typically didn’t have to do that kind of growth planning before.

Anything else to be aware of?  Well, what happens if you take a backup of a database with durable memory-optimized tables and attempt to restore it on a server that has less memory than is needed?

The answer is this: you’ll have a database stuck perpetually In Recovery, with nowhere to go until enough resources are available for the restore to succeed and you try again.

The parting message is this.  Calculating your IMOLTP memory needs should be a predictable operation, and in fact it is pretty basic math once you know how it works.  Pssst, don’t forget to include the indexes.  It is no different from planning disk space for your log, mdf, tempdb, and other server drives.
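As a rough sketch of that math (the numbers are illustrative; the 24-byte row header and the 8-byte pointer per index come from how the engine lays out a memory-optimized row, and hash index buckets cost 8 bytes apiece):

```
row size  = 24-byte header + (8 bytes x index count) + payload
example   : 1 hash index, 200-byte payload
          = 24 + 8 + 200 = 232 bytes per row
1,000,000 rows           ~ 232 MB of row data
hash index, 1M buckets   ~ 8 MB more (8 bytes per bucket)
```

Predictable, as promised.  Until a varchar(max) column makes the payload term anything it wants to be.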

If you find yourself needing a max data type and can’t predict how high it will go, maybe you should re-think IMOLTP for that specific use case – even though it’s now available in 2016.

Appendix:

Want a quick way to run your own server out of memory using IMOLTP and a max length field, for test purposes?  Try this on a schema-only table where myval is a varchar(max) and mykey is the primary key:
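If you need a table to point it at, here is a minimal sketch.  The table and column names match the script below; the schema, bucket count, and nullability are my own assumptions:

```sql
-- schema-only memory-optimized table: nothing durable, nothing to clean up
-- except the memory it eats (bucket count is an arbitrary assumption)
create table dbo.sillystuff
(
    mykey int not null primary key nonclustered hash with (bucket_count = 1000000),
    myval varchar(max) not null
)
with (memory_optimized = on, durability = schema_only);
```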

declare @badidea int = 0;
declare @count int = 1;
declare @text varchar(max) = '';

-- make a really big block of text
while @count < 1000
begin
    set @text = @text +
        'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor ' +
        'incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud ' +
        'exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.';

    set @count = @count + 1;
end

set @count = 1;

-- insert it over and over until we run out of memory
-- (@badidea is never changed, so this loop runs until Msg 701 kills it)
while @badidea < 1
begin
    insert into sillystuff (mykey, myval) values (@count, @text);
    set @count = @count + 1;
end

 




Information

This entry was posted on June 14, 2016 by in Career Skills, DMVs and System Info, T-Sql Tuesday.