Author: sqlatspeed

Summer Speaking, Then Some Soccer (er, Football)


Greetings from Cork, Ireland! I wanted to get a quick post out about my next 3-4 speaking opportunities before I take a little late summer/early fall break from speaking (for a very good reason). We’ll get to that at the end.

First of all, I’m here in Cork, Ireland to speak at SQL Saturday Ireland in a couple of days (June 9th, to be exact). I’m really excited about this opportunity, as it’s the second international SQL Saturday I’ve been able to do this year. I am incredibly grateful for these opportunities to speak to new audiences and will do my very best to not let down the attendees and organizers. Cork has been wonderful in the few hours I’ve been here already and I look forward to a few more hours of exploring Cork tomorrow before the speaker get-together ramps up in the evening. If you haven’t registered for #SQLSATCORK yet, you can do so here.

Following Saturday’s event here in Ireland, I’m thrilled to announce that I’ve been selected to speak at two SQL Saturday events in July, both much closer to home than the roughly 4,000 miles between my house and where I sit here in Cork. I’ll be speaking at SQL Saturday Louisville on July 21st and SQL Saturday Columbus on July 28th.

Louisville was the first SQL Saturday to ever select me, and I remain grateful to Chris, John, Mala and the team for that initial selection and the community path it’s put me on from that point forward. Beyond my personal feelings for this event and the group behind it, rest assured that Louisville is always a fantastic event with a great speaker list top to bottom despite my repeated selections! If you haven’t registered for the event, click here. I’d also like to plug the pre-conference session from two ex-colleagues of mine (and current Microsoft employees), Brad Ball and Josh Luedeman. These guys are doing very cool, cutting-edge stuff in Azure to really make your data warehouse data come alive for data consumers. Their dovetailing of data visualizations, sentiment analysis, and modern DW techniques is worth a day of your time – register for it here.

This will be my first trip to Columbus for SQL Saturday and I’m excited about that because A) I lived in Westerville for a few years when I was a kid and B) I’m a frequent visitor to Columbus Crew matches with my son, so it will be fun to present up there. I’ll also be getting together with my fellow FGE Professional Sports Analytics founder (George Bryant) to watch some football after the event at a Crew pub, so the whole day should be a blast from start to finish. I haven’t been able to say too much about what George and I are doing with FGE, but if you’re interested in what we’re doing, click here. We’re a bit light on the details there for now, but rest assured there are many blog posts coming from the work he and I are doing. A good bit of our work is under layers of legally enforced silence – for now. That will soon change and I’m eager to share it with you when I can!

Next, I’ve submitted individual sessions to SQL Saturday Indianapolis and I’m eagerly awaiting that schedule to see if I’ve been selected. Even if I miss out on an individual session, I do know that I will be doing a pre-con on Friday, August 10 with my friend and co-worker Sean Werick. It’s titled “Modernize your Data Warehouse” and tickets are available here. It’s always fun to be in Indy even when I’m not at the Speedway so I hope I’ll see you there.

Finally, I’ve submitted to the Atlanta Azure DataFest, held August 16th and 17th at the Microsoft Technology Center in Alpharetta. I’m hoping I am selected for that event because I’ve submitted my Cognitive Services, Azure Logic Apps, sentiment analysis, and soccer/football talk (i.e. the Premier League Mood Table session). That’s received wonderful response where I’ve given it across the USA and the world but it doesn’t always fit neatly into some PASS events so I’m hoping it finds a home at Azure DataFest.

Following that, I won’t be at any SQL Saturdays for a couple months so I can be home on the weekends to see my son play with his U10 club soccer (to some readers)/football (to other readers) team representing Lexington FC. I’m incredibly proud of the level at which he is able to play the sport that he loves, and despite my woeful lack of talent in that area, I can’t wait to watch him take to the field this fall. Thanks for reading – hopefully more to come soon!

(Halfway) Around the World in 18 Days


Greetings! I’m excited for some of the technical posts that I’m working on, but before I’m able to publish those I wanted to share the details of my speaking schedule for May. I’m incredibly fortunate to have these speaking opportunities, and I’m incredibly excited to share them with you and to share my presentations with the attendees at these events!

My epic May begins this weekend with SQL Saturday Jacksonville on May 5th. I’m looking forward to catching up with some friends in the area, but I’m also looking forward to my first presentation in Florida! I’ll be presenting my “How to Keep Your Database Servers Out of the News” session. I really enjoy this session because it lends itself to a lot of interactivity with the group as we talk through various challenges people have had and the questions those challenges bring to their mind. If you’re attending, I look forward to seeing you and hearing your questions. If you’re not attending, click here to register and I’ll see you there!

The following weekend, on May 12th, I’ll be presenting at SQL Saturday Finland in Helsinki. It is an understatement to say that I’m excited for this one. My wife has requested that I bring Kimi Raikkonen home with me, and while I’m pretty sure she’s going to be disappointed in my failure to do that, I’m thrilled that I’m meeting my goal by speaking there! I set a personal goal to do at least one international presentation in 2018 and I’m incredibly grateful to the organizers of SQL Saturday Finland for selecting my session on “New Features and New Speed in SQL Server 2016 (and 2017) Always On Availability Groups”. I last presented this session at SQL Saturday Cleveland in February and it went really well and seemed to help some folks with challenges they were having, so I’m excited to bring this one to an international audience. If you’d like to register, click here to do that.

Following my presentation in Finland, I’m hopping a quick 3-hour flight to England to present “Feelings Quantified – Ranking Football Clubs By Supporter Sentiment” to Tech Nottingham. I’m thrilled that I was able to work this out with the organizers and they’ve been absolutely wonderful to me as we’ve worked to get this set up. This will be the second time I present on the Azure Logic Apps and Azure SQL DB guts of the Men in Blazers Mood Table I blogged about here in December and the first time it will be to a crowd who calls it football instead of soccer. 🙂 When I arrive in London on 5/13 I’m taking a few hours out, before hopping the train to Nottingham, to catch Tottenham’s (my favorite English football club) final match of the season and final match at Wembley Stadium before they move to their new stadium in the fall, so it’s going to be a soccer-ful/football-ful couple of days! Come On You Spurs! If you’re interested in learning more about this event, information can be found here.

After that journey, I head back stateside for a couple days off in New York City before presenting my “Data To Impress Those That Sign The Checks – Azure Logic Apps, Social Media, and Sentiment Analysis” session at SQL Saturday New York City. This session is the American-ized version of my mood table presentation (less soccer emphasis and slightly more technical focus) so it will be interesting for me to present both versions of this talk a few days apart. Also, it is no exaggeration to say that attending SQL Saturday NYC in 2015 changed the course of my career, so I definitely encourage you to register. Click here to do that. The organizers do a great job with this event, it’s in a great city, and I’m very appreciative of being invited to speak at an event that’s been so significant in my professional growth. I hope to see you there!

Lastly, I wrap up my journey right where I’m sitting as I finish this blog: my home office. IDERA Software has been kind enough to invite me to present a Geek Sync on 5/23 with my “Where Should My Data Live (and Why)?”. This session is great for data professionals in an environment where they’re being encouraged to expand the organization’s data estate to the cloud. It offers several real-world examples of how cloud and on-premises deployments can work together and complement each other. We also go over some pros and cons of the cloud vs. on-premises and dispel some myths as well. I hope to “see” you there. Click here to register and hear me run my mouth for an hour on May 23rd!

I know I keep saying it, but I am grateful to the organizers of all of these events for allowing me to speak to their groups. I can’t wait to meet #sqlfamily from other parts of the world, see places I’ve never been, and hopefully share a little knowledge along the way. Thanks for reading and hope to see you at one of these events!

I’m Speaking at SQL Saturday Cincinnati!


It’s been a busy month since I last blogged for T-SQL Tuesday #99, but I’m hopeful that my blogging will get a bit more regular once I get through this weekend – and an exciting weekend it will be!

I’ve been fortunate enough to be selected to give both a full-day pre-conference session and a regular session at this weekend’s SQL Saturday Cincinnati. The full-day pre-conference session is on Friday, 3/16/18, and it’s titled “Modernize Your Data Warehouse with Big Data”; I’m presenting it with my colleague Warren Sifre (t). I’m really looking forward to my initial effort at a full-day pre-con!

My regular session is a veteran of a few SQL Saturdays now and it always generates good discussion in the room so I’m hoping for a good turnout and good questions! It’s titled “How to Keep Your Database Servers Out of the News” and I’m looking forward to presenting it at this first edition of SQL Saturday Cincinnati on Saturday, 3/17/18.

I believe the event has reached capacity, but just in case some folks have cancelled, go ahead and register for it here. I hope to see you there!

I’m also hoping to be able to announce soon several speaking engagements in the May/June timeframe. I can’t do so yet, but I’m looking forward to sharing those with my readers, so watch this space if you’re interested in where in the world I’ll be later this spring!

T-SQL Tuesday 99: Racing Brings Me #sqlibrium


Thanks to Aaron Bertrand (b|t) for hosting this month’s edition of T-SQL Tuesday, the 99th in the blog party series, and for an interesting topic choice for this edition. You can find Aaron’s T-SQL Tuesday #99 introductory post here, but Aaron gave us a choice this time around: share a passion of ours with the SQL community or write about a favorite/most annoying T-SQL bad habit. While I gave some thought to the technical post, I couldn’t turn down an opportunity to talk about my love for racing and how much I enjoy getting to actually drive a race car a few times each year. Since thinking about, talking about, and planning for racing does help bring some balance to my life, #sqlibrium as Drew coined the term, let’s talk for a few minutes about how cool (and yes, relaxing) it is to drive race cars once in a while.

“There are only three sports: bullfighting, motor racing, and mountaineering; all the rest are merely games.” – Ernest Hemingway

I was a big enough racing dork when I was a kid that I had a t-shirt with this on it when I was in elementary school. I honestly don’t remember a time in my life when I didn’t want to look at, read about, or drive race cars. However, if this post turns into “Matt waxes poetic about racing”, it will be about 5,000 words long and incredibly boring to everybody but me. Put much more simply, while a lot of people look for relaxation from a good hike or a relaxing day on the beach, my beach is at a racetrack. Whether I’m watching the cars, working on them, or driving them, it has a way of clearing my head unlike anywhere else. For the sake of brevity(-ish), I’ll focus the rest of this blog on my on-track exploits, such as they are.

File_003

As you can see, we take this racing stuff quite seriously. The picture is of me waiting on the grid at Indianapolis Motor Speedway in June 2017 before taking to the track for the first of two races. The grid marshals thought it would be funny to give us silly umbrellas to block the sun while we waited – and it was.

That said, this picture does a decent job of showing what my Formula First looks like up close. For those that are interested in the technical specs (which is likely very few of you), the basics are that Formula Firsts are 1600cc air-cooled Volkswagen engines mated to purpose-built open wheel chassis riding on Hoosier R60 tires. Hoosier has been a great sponsor of our U.S. Formula First Championship, which is a 5-6 weekend series currently in its 12th year of competition. We run at great tracks all over the eastern half of the U.S., from Road America in Wisconsin, to Watkins Glen in New York, to Road Atlanta in Georgia. If you’d like to read more information on the series (and see some great videos), the link is here.

Now that we’ve covered the basics of the car and the series that I race in, you’re probably wondering “what cool stuff have you done in these cars, Matt?”. Now, some of you asking that question may think cool stuff is “what have you won?” and others may think cool stuff is “what have you crashed into?”. I’ll cover both angles before we wrap up this blog, but if you’re just here for the crashes, here’s a picture of a crash I just missed at Indy last year (thanks to Brian Schell for the image).

indy_crash_edited

First, what have I won? I’ve been fortunate enough to win trophies and take podiums (finishing in the top 3) at places like Indy, Watkins Glen, and Road America. Road America (in Elkhart Lake, WI) was my favorite race track (other than Indy) growing up and it still is, which makes this next story particularly frustrating even though it happened 12 years ago. After doing a couple weekends in 2005, I committed to running a full Formula First season in 2006. I went to Road America in the top 3 in championship points and was looking to have a great weekend. I qualified 2nd for our race on Saturday but, near the end of the first lap, the right-front suspension spectacularly came apart, ending my day quite early and giving my right hand a gnarly bruise to boot. I went into Sunday morning’s qualifying hopeful but still frustrated and qualified 5th – then a clutch problem reared its ugly head towards the end of the session. That sent the crew into a massive thrash to get the clutch replaced, an effort that was completed just minutes before we had to head to grid for the race.

Once the race started, the car was really good. I could run comfortably in the draft and started picking off cars and working my way up the order. With 3 laps to go, I passed for the lead and was leading for the first time in my career! The other driver and I traded the lead (and fast laps) back and forth over those last few laps and, on the last lap, I exited the final turn (turn 14) in the lead. I didn’t get the best launch off the corner, though, and the other driver had a run on me. I put on a within-the-rules blocking move but ended up losing the race by roughly the length of the nose of the car. I was crushed, especially as my wife and dad were there to see it. The picture below was taken just after the race while the top 3 finishers waited in line to make sure our cars met minimum weight. I’m still in the car chatting with the guy who beat me. My wife, as you can see, was not thrilled with the loss (she’s a bit competitive)!

roadamerica_2006_postrace

Our series’ next race that season was at Nelson Ledges in Ohio, and after a solid finish during the Saturday race, I was involved in a nasty crash near the end of Sunday qualifying. I was hit from behind by another driver after sliding through the first turn, and that contact resulted in his left side tires bouncing off my roll bar and then my helmet and him flipping end over end numerous times. The impact cracked the shell of my helmet, so I was incredibly fortunate to only be checked out for a concussion and treated for bruises and scrapes – that could have been far, far worse and it really put the previous race’s frustration into perspective. It did not, though, knock any sense into me and I’ve continued racing through the years (except for a break when the kids were born) as time and budget allowed.

I could go on for hours, but this ~1000 words is long enough. As I said, the racetrack is my beach. I love it and it must be in my blood, because I don’t remember not loving it. Based on the picture below (taken after my 3rd place finish at Indy in 2017), one of my kids might end up writing this same blog post in several years’ time. Thanks for reading – hopefully I’ll see you at the track.

indy_podium_kids

tsql2sday-300x300

They Let An Impostor Speak at PASS Summit?


Earlier this week the folks at PASS reached out to last year’s speakers asking us to share a story of how speaking at PASS impacted us professionally or personally. The first idea that popped into my head was to blog about how speaking at PASS Summit lends you a bit of unique professional credibility in the SQL Server/Microsoft Data Platform world – because it absolutely does. That said, I figured a lot of folks would blog, tweet, or make videos around exactly that subject and likely handle it more creatively than I would have. So, while speaking at PASS Summit has definitely had a positive impact on me professionally, I decided to blog about what I believe the biggest impact of my speaking at PASS Summit 2017 has been – a weapon I can use to battle impostor syndrome. I saw myself as the impostor I mentioned in the title of this blog.

If you’re unfamiliar with impostor syndrome, it is described (via Wikipedia) as “…a concept describing individuals who are marked by an inability to internalize their accomplishments and a persistent fear of being exposed as a ‘fraud’”. As I’ve been fortunate enough to get to know more and more people in the SQL community over the last few years, I realize that many, if not most, speakers suffer from impostor syndrome. This is true for both first-time speakers and even speakers whom we would all consider “rockstars” in the community. I remember sitting in a speaker room at a SQL Saturday last year and hearing one of the presenters wonder aloud (as they left the room to give their session) “Is this the day these people figure out I have no idea what I’m talking about?”. I’ve certainly battled this and, while it’s gratifying to realize others struggle with this, that’s not necessarily particularly helpful in keeping that “impostor” voice quiet!

When I received the email that I had been selected to speak at PASS Summit 2017 I was sitting with my kids as they finished some homework. They were initially quite alarmed when I screamed and ran down the hallway with my arms in the air. Once I came back to them and explained why I was so excited they looked at me with blank stares for a while until my son said “so people are actually going to pay to hear you talk?”. He was shocked! That was also when it really began to sink in for me what a big deal this was going to be.

While I am always incredibly gratified and humbled when I’m selected to speak at any event, my previous speaking experience has been confined to SQL Saturdays and user groups. Those are wonderful opportunities but, as I often joke at the beginning of my sessions, “you are guaranteed to get your money’s worth from me” because those events are free to attend. If I disappointed an attendee at one of those (and I’m sure I have), I haven’t cost them any money.

While people often describe Summit as a “massive SQL Saturday”, the fact that people were spending their own (or their company’s) hard-earned money to attend ratcheted up the pressure for me. That said, once the talk was complete and I had fielded questions (and some compliments) from the folks that attended, that pressure transformed into some measure of validation. The fact that people spent money to be there and that 60-70 of them took the time to attend and applaud my talk was validating and invigorating to me. Now, when that impostor syndrome voice in my head gets louder, I can remind it that I spoke at PASS Summit. And I hope to do it again to keep that voice at bay!

Create a Read-only Routing List: A Quick How-to


Last week, IDERA was kind enough to invite me to host a Geek Sync Webinar for them. I presented this particular Geek Sync as a webinar version of my Top 5 Tips to Keep Always On Always Humming session that I’ve done at a few SQL Saturdays in the past. Although brevity forces the slightly cutesy title, the session is specific to Always On Availability Groups and my top tips to keep them running after you’ve set them up. Essentially, it’s a list of mistakes I’ve made (or colleagues of mine have made), and I don’t want you to trip over the same things that we did!

It was a good, interactive audience and I received a few questions that I simply couldn’t handle in the allotted hour for the Geek Sync. While I plan an additional blog early in the New Year to discuss those, I did get a few specific questions on the basics of setting up a read-only routing list and I committed to those people that I would get a blog out before my vacation began that covered the basics of setting up a read-only routing list.

Before we dive in, this blog assumes that you have set up a functional Always On Availability Group with a listener. If you are struggling with that, feel free to reach out to me via Twitter and we’ll take a look at it and see what the best way is for me to help you. If that’s all working for you, though, all that is left is to follow the instructions below and you’ll have a functional read-only routing list.

We need to first identify the read-only routing URL that we will be using in our script. That URL follows this format: 'TCP://<FQDN of your server>:port # used by SQL instance'. For example, a read-only routing URL for one of my test servers may look like 'TCP://TESTSERVER01.MGTEST.DEMOS:1433'. If your server is set up according to defaults (or you know that the instance is only listening on one port and you know that port number), these instructions will be sufficient. If you have an instance whose setup is more complicated than this, I have always found the script contained in this Microsoft blog handy. That script, in very detailed fashion, walks you through how to calculate your read-only routing URL in a wide variety of configurations.
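For the simple case, a quick way to find the port number is to ask SQL Server which TCP port your own connection is using. This is just a sketch for the default scenario described above; it only works if you are connected over TCP (a shared memory connection will return NULL):

```sql
-- Returns the TCP port the current connection is using.
-- On a single-port, default-style setup, this is the port
-- you plug into the read-only routing URL.
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```

Run this while connected directly to the replica (not through the listener) so the port you see belongs to the instance you are building the URL for.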

Now that we’ve determined our read-only routing URL, we’re ready to create our read-only routing list. To keep this blog simple and readable, I’ll create a read-only routing list for a two-node Always On Availability Group (meaning the read-only connections will be pointed at the secondary under normal operations). Since, in this blog, we’re doing all of this via T-SQL, I’ve pasted a sample script below that 1) ensures that the secondary replica allows read-only connections, 2) establishes the read-only routing URL, and 3) creates the read-only routing list. My sample script (which sets up routing lists for scenarios when either 01 or 02 is the primary) is pasted below. Following the script, I have a short note about setting up round-robin load balancing via a read-only routing list (this is a SQL 2016(+) only feature) and a holiday sendoff!

ALTER AVAILABILITY GROUP [TESTAG01]
MODIFY REPLICA ON
N'TESTSERVER01' WITH
(SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
ALTER AVAILABILITY GROUP [TESTAG01]
MODIFY REPLICA ON
N'TESTSERVER01' WITH
(SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://TESTSERVER01.MGTEST.DEMOS:1433'));
GO

ALTER AVAILABILITY GROUP [TESTAG01]
MODIFY REPLICA ON
N'TESTSERVER02' WITH
(SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
ALTER AVAILABILITY GROUP [TESTAG01]
MODIFY REPLICA ON
N'TESTSERVER02' WITH
(SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://TESTSERVER02.MGTEST.DEMOS:1433'));
GO

ALTER AVAILABILITY GROUP [TESTAG01]
MODIFY REPLICA ON
N'TESTSERVER01' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (('TESTSERVER02', 'TESTSERVER01'))));
GO

ALTER AVAILABILITY GROUP [TESTAG01]
MODIFY REPLICA ON
N'TESTSERVER02' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (('TESTSERVER01', 'TESTSERVER02'))));
GO

If you’ve made it this far, thanks for reading! As I mentioned, the examples above are simple and meant to provide a basic understanding of how to set up a read-only routing list. That said, SQL Server 2016 and later gives you the ability to provide round-robin load balancing via a read-only routing list. If I were to extend my sample AG above to four nodes and I wanted three of them to be load-balanced, it might look something like this: READ_ONLY_ROUTING_LIST = (('TESTSERVER01', 'TESTSERVER02', 'TESTSERVER03'), 'TESTSERVER04'). If you note the extra parentheses to the left of TESTSERVER01 and to the right of TESTSERVER03, that is all it takes for the listener to understand that it is supposed to load balance read-only connections among those three nodes.
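For completeness, here is a sketch of that four-node routing list as a full statement, using the same hypothetical server and AG names as the script above (a fourth replica, TESTSERVER03 and TESTSERVER04, would need the same ALLOW_CONNECTIONS and routing URL setup first):

```sql
-- When TESTSERVER01 is primary, load balance read-only connections
-- across servers 01-03; fall back to TESTSERVER04 only if none of
-- those are available.
ALTER AVAILABILITY GROUP [TESTAG01]
MODIFY REPLICA ON
N'TESTSERVER01' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST =
    (('TESTSERVER01', 'TESTSERVER02', 'TESTSERVER03'), 'TESTSERVER04')));
GO
```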

As I often say when I present on them, Always On Availability Groups can be a very complex topic to write about and discuss. This post is, by no means, meant to be an exhaustive exploration of read-only routing lists or availability groups. It is meant to be, as requested by the Geek Sync attendees, a quick view into setting up a basic read-only routing list. Hopefully it achieved that goal!

My blog will be quiet over the next couple weeks as I celebrate the holidays with family and friends. I sincerely hope all of you are able to relax and recharge as well this holiday season. We’ll be speaking a lot more in 2018, I believe! Cheers and Happy New Year!

Men In Blazers PL Mood Table – How It Works


Odds are, if you’re reading this, you represent the part of the Venn diagram where my Microsoft data platform buddies and Premier League soccer fans intersect or you clicked the link in today’s Men In Blazers Raven because you were curious how the Premier League mood table works. Rest assured, if you can think of different ways to do this and tweaks to make, I’d love to hear about them in the comments (unless you’re an Arsenal supporter – mostly kidding)! I’m passionate about data solutions and am always trying to stay abreast of rapidly changing platforms and technologies. Feel free to reach out to correct me, congratulate me, or just to connect with me professionally!

Before we go any further, I owe my buddy Bradley Ball (b|t) a tip of the cap and several cold beverages for the original blog that provided the inspiration behind the Mood Table. If you’re interested in a deeper dive than this high-level overview, click through to my deep dive post here or his original post here.

First off, the foundation of the mood table is an Azure Logic App (basically a workflow container) created for each of the twenty Premier League clubs. Those logic apps contain a Twitter connector that, when active (more on that later), searches Twitter for tweets containing each club’s official Twitter handle. Those tweets are collected by the Twitter connector and fed into the Sentiment Analysis function of the Cognitive Services API found within Azure. The Sentiment Analysis function adds a sentiment score to the tweet (0 is completely negative and 1 is completely positive) and passes it on to the final couple of steps in the process.

Once the sentiment score of the tweet is calculated, the tweet and its corresponding data (text, score, originating account, date created, etc.) is fed into a loop container that separates each club handle in the tweet and connects the tweet to every club tagged within the tweet. Finally, all of this data is stored in an Azure SQL Database (basically a database in the cloud) for persistent storage, querying, and analysis.

It is important to point out that each club’s logic app runs for a 10-minute window around the end of each match. It captures roughly the last 2-3 minutes of match action as well as the 7-8 minutes immediately following the full time whistle. As Rog told me, the intention is to capture the sentiment of tweets akin to when you turn to your mates (or buddies, in ‘Merican) at the pub and offer your instant analysis on your club’s success, failure, or utter mediocrity in the match.

Before I get into a brief description of the query I use to provide this critical information to Rog and Davo, I do want to mention that there was originally a real-time component of the mood table as well. I had written a Power BI report that showed a live table throughout the matches as tweets and their data streamed into the Power BI streaming dataset. After chatting with Rog, though, that was deemed far too optimal for MIB (not to mention quite pricey for my personal Azure account) and we settled on the current sentiment information surrounding the final whistle. It may make an appearance at some point, though!

Once all this Azure coolness has done the heavy lifting, the actual query that compiles the results is fairly straightforward. It takes the average of the sentiment scores tagged to a specific club and then ranks those clubs (and their corresponding average scores) from the highest average to the lowest average. The only data it excludes is tweets that contain only an image (as the sentiment analysis cannot score images – yet). Sadly, your hilarious soccer GIFs are not yet understood by the mood table.
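As a rough sketch of that compilation query (the table and column names here are illustrative stand-ins, not the production schema), it looks something like this:

```sql
-- Rank clubs by average tweet sentiment, happiest first.
-- Image-only tweets never receive a sentiment score, so the
-- NULL check excludes them from the averages.
SELECT ClubHandle,
       AVG(SentimentScore) AS AvgSentiment
FROM dbo.ClubTweets
WHERE SentimentScore IS NOT NULL
GROUP BY ClubHandle
ORDER BY AvgSentiment DESC;
```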

In a nutshell, that’s it. I look forward to comments, questions, and connections. As I mentioned, I have a much more detailed technical deep dive that you can read here. This has been a blast to work on and we have some potentially interesting plans for it moving forward. Finally, there is no truth to the rumor that Rog is paying the Russians to create bots to elevate Everton to the top spot. That is just the natural optimism and enthusiasm of Evertonians shining through! To the football…

Men In Blazers Premier League Mood Table – Deep Dive


Welcome nerdy GFOPs! If you’ve made it this far, you’re likely prepared for the nerdiness (and lengthy post) that you are about to dive into. While our assumption was that some GFOPs would be curious, at a high level, about how the Premier League mood table works, my hope was that some GFOPs who work in analytics, data science, or anywhere in the data platform realm would be curious enough about this to dig into what I’ve done, discuss it, and hopefully build upon it to do cool things with the mood table or in our professional work. With that greeting, let me get to a couple acknowledgements and then we’ll get nerdy.

There are two blogs that provided much of the foundation for the mood table. First, the inspiration for the mood table came from a blog written by a friend/former boss of mine that now works at Microsoft, Brad Ball (b|t). Please go check out his blog post to better understand the foundation of the mood table. The second blog that proved to be an immense help can be found here. It was written by Toon Vanhoutte. I don’t know Toon, but I owe him a beer or three for a wonderful blog post laying out, in necessary detail, how to schedule the execution of the Azure Logic Apps. Without being able to schedule the enabling and disabling of these logic apps, the mood table would likely be too expensive to run regularly. So, cheers to you, Brad and Toon! Let’s get into the guts of the mood table.

The foundation of the mood table is an Azure Logic App created for each of the 20 clubs. Because of character limits on some of the object names, I was forced to use creative abbreviations for some of the clubs. Apologies for the name below, Arsenal supporters, but I am a Spurs supporter after all! This screenshot shows what is inside each logic app – I’ll go into more detail about each component below the screenshot.

logic_app_1

The first component (labeled “When a new tweet is posted”) is the Twitter connector itself. It signs into Twitter and searches for tweets containing the club’s official Twitter handle. It passes those tweets into the second component (labeled “Detect Sentiment”). That is where the sentiment analysis actually takes place. That component analyzes the text (not images – yet) of the tweet and scores it from 0 (most negative) to 1 (most positive). The sentiment score is added to that dataset and passed to the third component – the for each loop container.
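To make that scoring step concrete, here’s a small Python sketch of how I think about the “Detect Sentiment” output. The JSON shape and field names below are my assumption for illustration (modeled on the v2-era Text Analytics responses the connector wraps), and the 0.4/0.6 bucket thresholds are mine, not anything official:

```python
import json

# A sample response shaped like the sentiment API output the
# "Detect Sentiment" connector wraps; the exact field names here
# are an assumption for illustration, not the documented contract.
sample_response = json.dumps({
    "documents": [
        {"id": "1", "score": 0.92},   # a delighted supporter
        {"id": "2", "score": 0.08},   # a despairing supporter
    ]
})

def bucket(score: float) -> str:
    """Map the 0 (most negative) to 1 (most positive) score to a mood label."""
    if score < 0.4:
        return "negative"
    if score > 0.6:
        return "positive"
    return "neutral"

def score_documents(raw: str) -> dict:
    """Return {document id: mood label} from a raw JSON response."""
    docs = json.loads(raw)["documents"]
    return {d["id"]: bucket(d["score"]) for d in docs}

labels = score_documents(sample_response)
```

The mood table itself keeps the raw 0-to-1 score for averaging; the labels are just a convenient way to eyeball a batch of tweets.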

The loop container (labeled “For each 2”) parses each tweet and tags it with all clubs mentioned within the tweet. It then inserts that data into an Azure SQL Database (via the “Insert row” component in red) for storage, querying, and analysis.
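In pseudo-Python, the tagging step works roughly like this. The handle-to-club mapping is truncated to three clubs for illustration, and the row shape is a simplification of what actually lands in the Azure SQL table:

```python
# A hedged sketch of what the "For each 2" loop does: tag a tweet with
# every club whose official handle appears in the text, producing one
# row per (tweet, club) pair, ready for the "Insert row" step.
CLUB_HANDLES = {
    "@SpursOfficial": "Tottenham Hotspur",
    "@Everton": "Everton",
    "@LFC": "Liverpool",
}

def rows_for_tweet(tweet_text: str, sentiment: float) -> list:
    """Return one output row per club mentioned in the tweet."""
    lowered = tweet_text.lower()
    return [
        {"club": club, "tweet": tweet_text, "sentiment": sentiment}
        for handle, club in CLUB_HANDLES.items()
        if handle.lower() in lowered
    ]

rows = rows_for_tweet("What a finish by @SpursOfficial against @Everton!", 0.95)
```

Note that a single tweet mentioning two clubs contributes its sentiment score to both of them, which is exactly what you want for a derby.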

When I spun up the mood table for the first time, this is all it was. There was no scheduling component and it was just streaming tweets into my Azure SQL DB during all the matches. That proved to be prohibitively expensive, so let me walk you through some gotchas before we get into the scheduling components of the mood table.

Gotchas (i.e. how to not bankrupt yourself in Azure)

Gotcha 1: Twitter Connector Does Not Die (unless you kill it)

As I first started using the Twitter connector (screenshot below) I noticed that it asked a relatively simple question: “How often do you want to check for items?” Based on my research and discussion with others, our understanding was that the connector would wake up on that defined interval, check for tweets meeting the search criteria, and then shut down again until the next interval.

logic_app_2

Unfortunately, that understanding was incorrect. The first full weekend I ran the mood table, it cost several hundred dollars because the connector never stopped running; there were always tweets for it to find. Because all of this is billed at a fixed cost per operation, the cost climbed rapidly throughout the weekend, especially for matches involving large clubs with supporters throughout the world. My wife was relatively unimpressed by my mistake, so I wanted to mention it here to save you from a similar fate should you begin experimenting with this.
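To see how fast per-operation billing gets away from you, here’s some back-of-the-napkin arithmetic. Every number below is an assumed, illustrative figure (not Azure’s actual rates), but the shape of the math is the point: with a trigger that never sleeps, cost scales directly with tweet volume.

```python
# Purely illustrative cost arithmetic; the per-action price and tweet
# volume are assumed round numbers, not real Azure rates or real data.
price_per_action = 0.000025      # assumed USD per action execution
actions_per_tweet = 3            # trigger + sentiment + insert (rough)
tweets_per_weekend = 4_000_000   # a hypothetical busy derby weekend

cost = price_per_action * actions_per_tweet * tweets_per_weekend
print(f"${cost:,.2f}")
```

Tiny unit prices times millions of tweets lands you squarely in “several hundred dollars” territory, which is why the scheduling work below mattered so much.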

Gotcha 2: Scheduling Logic Apps Always Works (Eventually)

After a discussion with Azure Support to clarify exactly how the connector works, I dug into trying to figure out how to schedule these logic apps so they would enable and disable on command. That would allow me to do two things: 1) capture the time intervals Rog and Davo wanted during or after the match and 2) keep the cost manageable so I would not have to sell a limb or one of my children to continue financing the mood table.

That led me to Azure Job Collections and Azure Scheduler, and to Toon’s wonderful blog on how to wire this up. I will not rehash his blog here, but please go read it if you want a step-by-step walkthrough of what I did. One note I will add to Toon’s blog is that, wherever he references <workflow name>, that is referring to the name of the logic app itself. It took some trial and error for me to discover that, so I want to save my readers that time and effort.
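Under the hood, those scheduler jobs are ultimately hitting the Azure management REST endpoints that enable or disable a Logic App workflow, and the <workflow name> Toon mentions is the workflow segment of that URL. Here’s a hedged sketch that just builds the URLs; the subscription and resource group values are placeholders, and the api-version is the one current when I built this, so check the docs for yours:

```python
# Sketch of the management REST URLs the scheduler jobs POST to in
# order to enable/disable a logic app. Subscription ID, resource
# group, and workflow names below are placeholder values.
BASE = ("https://management.azure.com/subscriptions/{sub}"
        "/resourceGroups/{rg}/providers/Microsoft.Logic"
        "/workflows/{workflow}/{action}?api-version=2016-06-01")

def workflow_action_url(sub: str, rg: str, workflow: str, action: str) -> str:
    """Build the POST URL for 'enable' or 'disable' on a logic app."""
    if action not in ("enable", "disable"):
        raise ValueError("action must be 'enable' or 'disable'")
    return BASE.format(sub=sub, rg=rg, workflow=workflow, action=action)

url = workflow_action_url("my-sub-id", "moodtable-rg", "MoodTable-Spurs", "disable")
```

The actual POST needs an Azure AD bearer token, which the scheduler job (or Toon’s walkthrough) handles for you.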

My job collection contains an enable and disable job for each club’s logic app (for a total of 40 jobs). They run on set schedules (that I manually update) to capture the 2-3 minutes before the final whistle and the 7-8 minutes after the final whistle. Rog wants those instant post-match sentiments, the gut reaction you have to your club’s match as the final whistle blows.
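The way I reason about each pair of schedules looks something like the sketch below. The offsets (a 15-minute halftime, a manually estimated stoppage time, enable ~3 minutes before the whistle, disable ~8 minutes after) are my own assumptions, tuned match by match, not anything Azure computes for you:

```python
from datetime import datetime, timedelta

# Hedged sketch of a club's enable/disable window: estimate the final
# whistle (kickoff + 90 minutes of play + 15-minute halftime + an
# assumed stoppage allowance), then bracket it with the capture window.
def job_window(kickoff: datetime, stoppage_minutes: int = 5):
    whistle = kickoff + timedelta(minutes=90 + 15 + stoppage_minutes)
    enable_at = whistle - timedelta(minutes=3)   # catch the nervy end
    disable_at = whistle + timedelta(minutes=8)  # catch the gut reactions
    return enable_at, disable_at

# A hypothetical 3 PM Saturday kickoff:
enable_at, disable_at = job_window(datetime(2017, 12, 9, 15, 0))
```

In practice I update the 40 job schedules by hand each matchweek, because kickoff times (and stoppage time) refuse to be predictable.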

logic_app_3

This all seemed a brilliant idea until the next set of matches rolled around and I realized that some of the disable jobs failed their initial attempt – leading to yet more money accidentally donated to Microsoft. The Azure team’s holiday party likely got a bit better as a result of my mistakes!

Luckily, you can apply retry logic to these jobs (see below). I strongly recommend this as it will save you many disapproving looks from your significant other for making it rain in the Azure bill once again.

logic_app_4.PNG

Gotcha 3: Having A Laptop At The Pub Is Dorky (use the Azure iOS app)

The first weekend or two I ran the mood table, I monitored it via a laptop while watching matches at the pub. I’m told, from reliable sources, that this made me look like a dork. It also meant that I could not reliably monitor the jobs enabling and disabling on schedule unless I was somewhere with 1) room for a laptop and 2) decent Wi-Fi.

To use an obviously hypothetical example relayed to me by a friend of mine (wink wink, nudge nudge), let’s say a big match ended in the middle of a Sunday church service and “my friend” wanted to make sure the jobs kicked off on schedule. Taking out a laptop during the service would be deemed inappropriate by my friend’s family and his priest, and would likely incur the risk of him being struck by lightning were the Good Lord to catch a glimpse of such activity. How could my friend solve this issue?

He used the Azure Portal iOS app found here (it’s also available for Android). That allowed “my friend” to ensure that the club’s logic apps enabled and disabled on schedule and that the tweets were properly captured. I’m also delighted to report that “my friend” was not struck by lightning while checking this during church.

Things Deemed Too Optimal For The Mood Table

Originally, the mood table had a live component: a Power BI report that accepted the tweet information via a streaming dataset and showed a live table throughout the matches with up-to-the-second rankings. While it was fascinating, Rog wanted to focus on the end-of-match sentiments for the static table.
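For the curious, the live version boiled down to periodically pushing rows into that streaming dataset. Here’s a sketch of the row shape I have in mind; the column names are my invention for illustration, and in the real pipeline this JSON body gets POSTed to the dataset’s push URL:

```python
import json
from datetime import datetime, timezone

# Hedged sketch of a streaming-dataset row pushed during a match.
# Column names are illustrative, not the actual dataset schema.
def streaming_rows(club: str, avg_sentiment: float, tweet_count: int) -> str:
    payload = {"rows": [{
        "club": club,
        "avg_sentiment": round(avg_sentiment, 3),
        "tweet_count": tweet_count,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }]}
    return json.dumps(payload)

body = streaming_rows("Tottenham Hotspur", 0.8125, 5421)
```

Once rows like this are flowing, the Power BI report re-ranks the table in near real time, which made for delightfully nervy viewing.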

The ability to analyze streaming sentiment data in this way, however, may make an appearance for individual Premier League matches or, possibly, a major soccer tournament taking place this summer (no, not the Alternative Facts World Cup).

Conclusion

If you’ve read all the way to the end, I salute you! I’ve enjoyed working with the guys on the Premier League mood table and I’ve enjoyed putting together this deep dive post for those of you that are interested. As I mentioned, I look forward to questions or corrections (throw them in the comments) or connections via Twitter or LinkedIn. Cheers (and come on you Spurs)!

Coming 12/8/17 – Premier League Mood Table How-to

Greetings GFOPs (and others who have clicked on this post)! There were a few mentions on social media that I’d be publishing two blogs here this afternoon. The first was going to be a high-level, layman’s explanation for how the Premier League Mood Table I did for Men In Blazers (b|t) works and the second was going to be a deep dive into the nuts and bolts of how I built it, the mistakes I made, the ways I fixed those mistakes, and just how flipping cool Azure Logic Apps and Social Sentiment Analysis are. That second blog will form the foundation for a SQL Saturday/conference talk I intend to submit and present throughout 2018 (feel free to contact me if this interests you and your user group/conference/organization).

These blogs were originally timed with the 12/1 release of MIB’s latest Raven (their newsletter for readers who don’t listen to their podcast – register here if you’re not a subscriber) containing an interview with me, thus the handful of tweets and other social posts mentioning today as the day to take a look at my site.

Unfortunately, in typically suboptimal MIB fashion, the 12/1 Raven has been delayed until 12/8, thus Mauricio Pochettino (and I) make the sad face above. My blogs will be released in conjunction with that Raven, so we’ll see you back here next Friday. If you’re interested in soccer and/or data and/or social media analysis potentially relevant to your company, it will be worth a read. Have a great weekend and see you back here Friday, December 8!

 

Coast to Coast in the Next 30 Days!

It would be an understatement to say that I’m excited about this month in my SQL community life. While I have multiple submissions out to European conferences in the first half of 2018, this month’s highlights are two confirmed speaking engagements: SQL Saturday Charlotte on October 14th and PASS Summit on November 3rd.

As my previous post mentioned, I used to live just south of Charlotte and haven’t been back in years, so I’m looking forward to seeing some friends in the area along with meeting and reconnecting with more #sqlfamily.

While Charlotte is going to be a great event (and you should definitely register here), the coolest thing to happen this month will be my PASS Summit speaking debut on Friday, November 3 at 11 AM local time. I’m incredibly proud to have been selected to speak at Summit and am looking forward to unveiling new elements and new demos during my “Where Should My Data Live (and Why)?” session.

This session is all about opening more traditional database administrators’ eyes to the opportunities that cloud platforms and technologies give them to leverage and extend their existing on-premises implementations and deployments. I look forward to sharing what I know and learning from the crowd about their own experiences so I can improve this talk in the future as I continue to speak and our data professional world continues to evolve. Hope to see you in Eastern time or Pacific time in the next 30 days!