Month: March 2019

A Ye Olde Way to Break Stretch Database

I’m really excited to be publishing a real live technical blog post! I have fun blogging about speaking, travel for speaking, and racing, but I’m looking forward to my new professional opportunities lending themselves to more technical posts on a variety of data platform topics, including some more cutting-edge ones down the road. For today, though, I have a post about a SQL Server feature that doesn’t get a lot of love. We can debate the reasons why Stretch Database hasn’t seen much use, but it’s still there and still an interesting feature to play with.

Recently I had the pleasure of putting together a proof of concept for a customer who was looking into using Stretch Database to offload a large amount of archive data into Azure rather than their own datacenter(s). This particular application had some tables that were created nearly 15 years ago (a fact that will become relevant shortly). Enabling Stretch Database is covered well in the Microsoft documentation here, so I’ll skip straight to the error I was getting.
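For context, the core of what the wizard does can also be expressed in T-SQL. This is just a minimal sketch with placeholder names (ArchiveDB, StretchCredential, and dbo.OldOrders are all made up), assuming a database scoped credential for the Azure endpoint already exists:

    -- Allow this instance to use Stretch Database
    EXEC sp_configure 'remote data archive', 1;
    RECONFIGURE;

    -- Point the database at the remote Azure endpoint (server and credential are placeholders)
    ALTER DATABASE ArchiveDB
        SET REMOTE_DATA_ARCHIVE = ON
            (SERVER = N'yourserver.database.windows.net', CREDENTIAL = [StretchCredential]);

    -- Enable migration for a single table; OUTBOUND starts moving eligible rows to Azure
    ALTER TABLE dbo.OldOrders
        SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));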

I noticed when I enabled Stretch Database (via the wizard) that some tables started to migrate completely normally and some did not. I began querying sys.dm_db_rda_migration_status (a DMV used for monitoring Stretch Database operations) and saw that all of the tables that were not moving were registering error 7320 in the error_number column of that DMV. A quick search on that error revealed the generic error text: Cannot execute the query “%ls” against OLE DB provider “%ls” for linked server “%ls”. %ls. Not very helpful, is it? It was at this point we contacted Microsoft support, because we thought we’d found a bug – some tables were migrating correctly and some weren’t, even though they’d all been configured for stretch the same way via the wizard. Our support ticket bounced between Azure SQL Database support and SQL Server support before we were connected with a solid engineer who assisted in troubleshooting but was unable to find anything incorrect in the tables or the way Stretch Database was configured.
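If you want to check for the same symptom on your own system, a query along these lines will surface the failing migration attempts (a minimal sketch; the filter and ordering are just for convenience):

    -- Recent Stretch Database migration attempts that registered an error
    SELECT OBJECT_NAME(table_id) AS table_name,
           start_time_utc,
           migrated_rows,
           error_number,
           error_severity,
           error_state
    FROM sys.dm_db_rda_migration_status
    WHERE error_number IS NOT NULL
    ORDER BY start_time_utc DESC;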

We were at a loss, but he recommended another review of the SQL Server error log to see if there was anything untoward that we had missed in our previous passes. What I noticed in that final review was that we were occasionally seeing errors relating to ANSI_NULLS settings in the error log. Cross-referencing those errors with the times the Stretch Database migration process attempted to kick off on the erroring tables revealed that something to do with ANSI_NULLS was breaking the migration. But what?
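If you’d like to do the same cross-referencing, the (undocumented but widely used) xp_readerrorlog procedure can search the current error log for those messages, something like:

    -- Search the current SQL Server error log (0) for ANSI_NULLS-related messages
    EXEC sys.xp_readerrorlog 0, 1, N'ANSI_NULLS';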

I relayed this information to the support engineer and he contacted the development team to ask what ANSI_NULLS had to do with Stretch Database migration throwing an error related to linked servers – an odd chain of errors. They replied that if a table was created with the ANSI_NULLS setting set to OFF, the Stretch Database migration process throws that linked server error. These tables, given the age I mentioned earlier, had been created with that setting OFF. Problem solved! (Although changing the ANSI_NULLS setting for hundreds of related tables presents its own set of challenges that deserve their very own blog post.) In better news, the support engineer submitted a request to the docs team to clearly document this previously undocumented issue.
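For anyone wanting to check their own tables, the setting a table was created with is exposed in sys.tables, so finding the offenders is a quick query (a minimal sketch):

    -- Tables created with ANSI_NULLS OFF, which is what breaks Stretch migration here
    SELECT s.name AS schema_name,
           t.name AS table_name
    FROM sys.tables AS t
    JOIN sys.schemas AS s
        ON s.schema_id = t.schema_id
    WHERE t.uses_ansi_nulls = 0
    ORDER BY s.name, t.name;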

I realize, given the minimal adoption of Stretch Database, this blog post may be targeted towards a limited audience. That said, if it helps just one person avoid the issue I ran into and saves them the hours or days of work it took to figure out, it’s a huge help to somebody! If that somebody is you, cheers!

As always, thanks for reading and come back soon – there will be much more content here as 2019 rolls on.

Speaking of Blogging…Here Is Where I’m Speaking Soon

For the folks that follow my blog and have noticed it’s been a bit quiet around here – apologies. Personally and professionally, the last 3 months or so have been quite a whirlwind, but I’m still here and the blog is too. Tomorrow there will be a real live technical post here on the site, but for today I wanted to get a post up noting my next three speaking engagements.

On Saturday, March 30, I will be speaking at SQL Saturday Cincinnati. I will be presenting my fairly new session “Azure-d Availability: Scaling SQL Server to the Cloud” and I’m excited to be speaking at one of my local-ish SQL Saturdays. The link to my session is here and the registration button for the event is in the upper right corner of that page.

Following that, I’ll be presenting to Midlands PASS in Columbia, SC at 5:30 PM on Tuesday night. I went to college at Clemson so, in some ways, Columbia is a bit like home. If you’re familiar with the Clemson/South Carolina rivalry, however, in some ways it’s not. 🙂 Either way, I’m looking forward to speaking to the group about Azure Logic Apps, Azure SQL DB, and sentiment analysis using Azure Cognitive Services, as long as they don’t boo me out of there! The session link is here for those interested and in the area.

Finally, I’ll be crossing the Atlantic once again to speak at Data in Devon in the southwest of England. This is the event formerly known as SQL Saturday Exeter and I’m thrilled to be presenting my “Where Should My Data Live (and Why)?” session to an international audience. If you are at all close to the area or fancy a weekend in southwest England with some excellent training (and my session), the registration link is here.

Thanks for reading and I hope I see you at one of these events. Thanks to IDERA Software for making some of this travel possible. I hope you’ll come back tomorrow (March 29th) for my first technical blog in a while.