Yet Another Market Basket Analysis in Tableau

This video represents part two in my Market Basket Analysis series.

The steps in the post were inspired by the book Tableau Unlimited, written by a former co-worker of mine, Chandraish Sinha. I wasn’t planning to construct another market basket analysis video, but when I saw the approach outlined in his book, I felt it warranted sharing with my readers and followers.

In this version we’ll use the default Tableau Superstore data to show the relationship between sub-categories on an order, all without using a self table join. The visualization and analysis are driven by a user selection parameter.

Once the user selects a sub-category, the bar chart visualization updates to reflect the number of associated sub-category items on the same order.
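
If you’re curious what that co-occurrence logic looks like outside of Tableau, here is a rough T-SQL sketch of the same count. The table and column names (Orders, OrderID, SubCategory) are hypothetical stand-ins for the Superstore schema, and this sketch deliberately uses the self table join that the approach in the video avoids:

-- Count orders where each other sub-category appears alongside the selected one
DECLARE @SelectedSubCategory NVARCHAR(50) = N'Binders'  -- stands in for the Tableau parameter

SELECT o2.[SubCategory],
       COUNT(DISTINCT o2.[OrderID]) AS AssociatedOrders
FROM [dbo].[Orders] AS o1
INNER JOIN [dbo].[Orders] AS o2
    ON o1.[OrderID] = o2.[OrderID]
   AND o1.[SubCategory] <> o2.[SubCategory]
WHERE o1.[SubCategory] = @SelectedSubCategory
GROUP BY o2.[SubCategory]
ORDER BY AssociatedOrders DESC

The parameter-driven Tableau technique in the video produces the same ranking with calculated fields instead of a join.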

Sample Superstore Data 2

Watch the video and as always get out there and do some great things with your data!

Feel free to also check out Part 1 here where we create a simpler correlation matrix version that shows all the sub-category relationships in one visual.


Market Basket Analysis in Tableau


Market basket analysis is a favored technique retailers employ to understand the purchase behavior of their customers. When you log on to Amazon, you’ve most likely noticed the “Frequently Bought Together” section, where Jeff Bezos and company would like to cross-sell you additional products based upon the purchase history of other people who have purchased the same item.

Market Basket Analysis influences how retailers institute sales promotions, loyalty programs, cross-selling/up-selling and even store layouts.

If a retailer observes that most people who purchase Coca-Cola also purchase a package of Doritos (I know, they’re competing companies), then it may not make sense to discount both items at once, as the consumer might have purchased the associated item at full price anyhow. Understanding the correlation between products is powerful information.
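
For readers who want the math behind this kind of analysis, the standard association-rule measures are easy to state (this is general market basket theory, not something computed in the video):

support(A => B) = P(A and B)
confidence(A => B) = P(A and B) / P(A)
lift(A => B) = P(A and B) / (P(A) × P(B))

A lift above 1 means the two products appear together more often than chance alone would suggest, which is exactly the Coca-Cola and Doritos situation described above.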

In this video, we’ll use Tableau Superstore data to perform a simple market basket analysis.

Sample Superstore Data 2

Feel free to interact with this market basket analysis on Tableau Public and then download and dissect the workbook.

Watch the video and as always get out there and do some great things with your data.

Feel free to also check out Part 2 here where we’ll create an analysis driven by a user selection parameter.

When Corporate Layoffs Don’t Work

“When downsizing is a knee-jerk reaction, it has long-term costs. Employees and labor costs are rarely the true source of the problems facing an organization. Workers are more likely to be the source of innovation and renewal.” [1]

Case in Point: Circuit City Laid Off Employees for Over-performance

A combination of factors led to the demise of former electronics retailer Circuit City, and a number of them were self-inflicted wounds. The company located its stores in subprime locations, stopped selling appliances to cut warehouse storage and distribution costs, and underinvested in its web presence at a time when consumer preferences were beginning to shift online.

However, the company’s biggest blunder was its decision to lay off its most experienced and knowledgeable salespeople while trying to compete in the cutthroat electronics retail marketplace. In March of 2007, Circuit City announced a scheme to lay off 3,400 hourly workers (roughly 8% of its workforce), while offering a severance package with the ability to reapply for former jobs at a reduced salary. Any reapplications had to occur after a mandatory 10-week cooling-off period. Circuit City practiced genteelism by branding its cost-cutting and de-skilling scheme a “wage management initiative”.

Management decided to staff its stores with fewer people, with fewer skills, making less money, and expected this combination to yield long-term positive results. As a result of the layoffs, Circuit City placed knowledgeable, experienced sales staff on a platter and served them to its main competitor, Best Buy. Additionally, where did Circuit City expect to find quality people who would work for a company that did not value loyalty, experience and wage increases?

“From a strategy perspective, customer-facing sales personnel would appear to be a core resource and potential differentiator for a consumer products retailer,” he [Kevin Clark, an assistant professor of management at Villanova School of Business] says. “Especially in an era of rapidly changing and more complex consumer electronics, knowledgeable sales personnel who are perceived by customers as ‘experts’ can be a source of competitive advantage.” [2]

Reportedly, “employees who were paid more than 51 cents above a set pay range for their departments were fired.” [3] However, solidifying the trope of senior executives reaping the gains without the pains, the CEO and Chairman of Circuit City received almost $10 million in various kinds of compensation for steering the company to its imperiled state. [4]

In under two years (by November 2008), Circuit City announced it was going out of business. After the company laid off its highest-paid hourly workers and replaced them with cheaper, less skilled ones, in-store customer service levels plummeted, which negatively impacted customer perception and sales.

Southwest Airlines Gets it Right

Waving flag of Southwest Airlines editorial 3D rendering

Treating employees as mere cogs and judging employees by costs and not by the overall value they create is self-defeating.

Some companies don’t understand that making workers happy leads to elevated productivity and higher retention levels. High employee morale should be table stakes; instead, it is a key strategic differentiator. Southwest Airlines has never had a layoff in its 47-plus years of existence. That’s laudable when you consider that airlines endured the fallout from 9/11 and the Great Recession (when oil prices spiked over $100 a barrel). As a well-deserved consequence, Southwest Airlines routinely leads domestic airlines in customer satisfaction.

Consider this example of how Southwest Airlines treated its recruiting team during the global financial crisis:

“At one point, however, Southwest Airlines was staring at a tough time financially and it did ‘corporate redeployment’. It had 82 employees in the recruiting team. When the company put [in] a hiring freeze, it also wondered what to do with 82 of its employees in this particular team. The company utilised them for customer service. The result: Customer satisfaction went up as a result of this team’s enhanced skill set. When the economy recovered, the team went back to its original job; only this time, they had an additional skill set, which helped the company and the customers alike.” [1]

If you were in the airline industry would you rather work for Southwest Airlines or another domestic competitor (that I mercifully will not name) which embodies layoffs, labor strife and toxic mismanagement of employees?

The Negative Impact of Layoffs

There is a time and place for layoffs. However, more often than not, companies lay off employees during down times in the business cycle simply to lessen the impact on profits, not to avoid a collapse of the business. Against their own best interests, companies also announce layoffs during times of rising profits, which causes their best people to head for greener pastures. Any expected cost savings are negated by lower productivity (when the best performers leave), lower innovation and a remaining demoralized workforce subjected to the negative effects of survivor syndrome (i.e., the feeling of guilt after seeing longtime co-workers discarded).

Additionally, companies are impacted by “brand equity costs—damage to the company’s brand as an employer of choice.” [1] Sites like Glassdoor offer unfairly laid-off employees the opportunity to share their sense of betrayal online, which can significantly impact a company’s reputation.

Shortsighted management typically operates under the assumption that layoffs will positively impact shareholders. While financial analysts may cheer downsizing efforts, research indicates that layoffs have negative effects on share prices.

“A recent analysis of 41 studies covering 15,000 layoff announcements in more than a dozen countries over 31 years concluded that layoff announcements have an overall negative effect on stock-market prices. This remains true whatever the country, period of time or type of firm considered.”[1]

It should come as no surprise that Circuit City’s stock price fell 4% the day after the company pulled the plug on its most experienced employees. [5]

References:

[1] “Employment Downsizing and Its Alternatives.” SHRM Foundation. Retrieved from https://www.shrm.org/foundation/ourwork/initiatives/resources-from-past-initiatives/Documents/Employment%20Downsizing.pdf

[2] “Circuit City plan: Bold strategy or black eye?” NBC News. April 2, 2007. Retrieved from http://www.nbcnews.com/id/17857697/ns/business-careers/t/circuit-city-plan-bold-strategy-or-black-eye/

[3] “Circuit City Cuts 3,400 ‘Overpaid’ Workers.” Washington Post. March 29, 2007. Retrieved from http://www.washingtonpost.com/wp-dyn/content/article/2007/03/28/AR2007032802185.html

[4] “Thousands Are Laid Off at Circuit City. What’s New?” New York Times. April 2, 2007. Retrieved from https://www.nytimes.com/2007/04/02/business/media/02carr.html

[5] “It’s the Workforce, Stupid!” The New Yorker. April 30, 2007. Retrieved from https://www.newyorker.com/magazine/2007/04/30/its-the-workforce-stupid

Circuit City image copyright: nazdravie

Use Clustering Analysis in Tableau to Uncover the Inherent Patterns in Your Data

The following is a guest post contributed by Perceptive Analytics.

Clustering:

Clustering is the grouping of similar observations or data points. Tableau enables clustering analysis by using the k-means model and a centroid approach. The model divides the data into k segments, each with a centroid; the centroid is the mean value of all points in that segment. The objective of the algorithm is to place the centroids so that the total sum of distances between the centroids and the points in their segments is as small as possible.
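
In symbols, this objective is the within-cluster sum of squares (the textbook formulation of k-means, stated here for reference rather than taken from Tableau’s documentation):

WCSS = Σ (i = 1 to k) Σ (x in Sᵢ) ‖x − μᵢ‖²

where Sᵢ is the i-th segment and μᵢ is its centroid; k-means searches for the assignment of points to segments that minimizes this total.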

In this post we will demonstrate some of clustering’s practical applications using Tableau. To get started, download the dataset from this link.

Let’s get our hands dirty!

Examine the dataset; it contains data about different characteristics of flowers. Once the data is loaded into Tableau, it will look like the screenshot below.

Picture1

Now let’s plot petal width against petal length. Just drag and drop the two measures onto the Rows and Columns shelves as shown below.

Picture2

Here we see only one data point because Tableau aggregates measures by default. We can “un-aggregate” the data with a click, as shown below.

Picture3

Just go to the Analysis menu and un-tick the Aggregate Measures option.

Picture4

Now we can observe a scatter plot of the two measures. Let’s cluster these data points according to their species by navigating to the Analytics pane as shown below.

Picture5

Drag and drop the cluster option on to the plot.

Picture6

Clusters are formed automatically, though there is an option to change the number of clusters. Users can also select the variables used for cluster generation; by default, Tableau uses the fields in the view to form the initial clusters.

Picture7

We can visually observe the clusters and Tableau provides a handy option that displays cluster statistics.

Picture8

Click on the “describe clusters” option to observe a summary and model description.

Picture9

The Summary tab provides a high-level overview of the variables used in the model and various sum-of-squares information. Let’s turn our attention to the Models tab and the main generated statistics.

Picture10

F-Ratio:

The F-ratio is used to determine whether the expected values of a variable differ from one group to another. It is a ratio of variances (sums of squares scaled by their degrees of freedom).

F = Between-Group Variability / Within-Group Variability

The greater the F-statistic, the better the corresponding variable is at distinguishing between clusters.
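
Written out with its degrees of freedom (the standard one-way ANOVA form, stated for reference):

F = [SS_between / (k − 1)] / [SS_within / (n − k)]

where SS_between and SS_within are the between-group and within-group sums of squares, k is the number of clusters and n is the number of observations.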

P-Value:

In a statistical hypothesis test, the p-value helps you determine the significance of your results. Here, the p-value is the probability that the F-distribution takes on a value greater than the actual F-statistic computed for a variable. If the p-value falls below a specified significance level, the null hypothesis can be rejected. The smaller the p-value, the more the expected values of the corresponding variable differ among clusters.
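
In symbols, using the same degrees of freedom as above (again the standard definition rather than anything Tableau-specific):

p = P( F(k − 1, n − k) > F_observed )

that is, the probability that an F-distributed variable with those degrees of freedom exceeds the F-statistic reported for the variable.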

Tableau provides an option to save the formed clusters as a group that can be used in subsequent analyses. Simply drag and drop the cluster from the Marks card to the Dimensions section to save it as a group.

Picture11

Tableau doesn’t allow clustering on these types of fields:

  • Dates
  • Bins
  • Sets
  • Table Calculations
  • Blended Calculations
  • Ad-hoc Calculations
  • Parameters
  • Generated Longitude and Latitude Values

Let’s look at another example using the default World Indicators data set that comes with Tableau. Open the sample workbook named World Indicators and explore the data regarding various countries.

Picture12

Try using different variables to form clusters. Use the model description to learn about the various countries based upon their clusters.

Picture13_1

Here it shows average life expectancy, average population above 65 years and urban population. These statistics provide insight into the composition of each cluster. We can also see which countries make up each cluster, as shown below: select any cluster, go to the “Show Me” tab and choose the text table to view the names of the countries present in that cluster.

Picture14

Conclusion:

We’ve covered only a few scenarios showing how clustering aids the segmentation of data. Clustering is an essential function of exploratory data mining. Keep exploring the results of cluster analysis by using different types of data sets. Keep rocking!

“Happy Clustering!!”

Author Bio

This article was contributed by Perceptive Analytics. Juturu Pavan, Prudhvi Sai Ram, Saneesh Veetil and Chaitanya Sagar contributed to this article.

Perceptive Analytics provides Tableau Consulting, data analytics, business intelligence and reporting services to e-commerce, retail, healthcare and pharmaceutical industries. Our client roster includes Fortune 500 and NYSE listed companies in the USA and India.

Use the Power BI Switch Function to Group By Date Ranges

In this latest video, I’ll explain how to use a handy DAX function in Power BI to group dates together for reporting. We’ll examine a dashboard that contains fields for purchase item, purchase date and purchase cost. We’ll then create a calculated column and use the SWITCH function to perform our date grouping on the purchase date.

Watch the video to learn how to group dates into the following aging buckets, which can be customized to fit your specific need.

  • 0-15 Days
  • 16-30 Days
  • 31-59 Days
  • 60+ Days

If you are familiar with SQL, then you’ll recognize that the SWITCH function is very similar to the CASE statement, which is SQL’s way of handling IF/THEN logic.
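
To make the comparison concrete, here is a rough T-SQL sketch of the same aging buckets written as a CASE statement. The table and column names (dbo.Purchases, PurchaseItem, PurchaseDate, PurchaseCost) are hypothetical; the video builds the equivalent logic as a SWITCH-based calculated column in Power BI:

-- Bucket each purchase by its age in days relative to today
SELECT [PurchaseItem],
       [PurchaseDate],
       [PurchaseCost],
       CASE
          WHEN DATEDIFF(DAY, [PurchaseDate], GETDATE()) <= 15 THEN '0-15 Days'
          WHEN DATEDIFF(DAY, [PurchaseDate], GETDATE()) <= 30 THEN '16-30 Days'
          WHEN DATEDIFF(DAY, [PurchaseDate], GETDATE()) <= 59 THEN '31-59 Days'
          ELSE '60+ Days'
       END AS [Aging Bucket]
FROM [dbo].[Purchases]

Because CASE (like SWITCH) evaluates its conditions in order, each purchase lands in the first bucket whose upper bound it satisfies.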

Even though we’re creating a calculated column within Power BI itself, best practice is to push calculated fields to the source when possible. The closer calculated fields are to the underlying source data, the better the performance of the dashboard.

My Submission to the University of Illinois at Urbana-Champaign’s Data Visualization Class

I’m a huge fan of MOOCs (Massive Open Online Courses). I am always on the hunt for something new to learn to increase my knowledge and productivity, and because I run a blog, MOOCs provide fodder for me to share what I learn.

I recently took the Data Visualization class offered by the University of Illinois at Urbana-Champaign on Coursera. The class is offered as part of the Data Mining specialization of six courses that, when taken together, can lead to graduate credit toward the university’s online Master of Computer Science in Data Science degree.

OK, enough with the brochure items. For the first assignment, I constructed a visualization based upon temperature information from NASA’s Goddard Institute for Space Studies (GISS).

Data Definition:

To understand the data, you have to understand why temperature anomalies are used instead of raw absolute temperature measurements. The temperatures shown in my visualization are not absolute temperatures but rather temperature anomalies.

Basic Terminology

Here’s an explanation from NOAA:

“In climate change studies, temperature anomalies are more important than absolute temperature. A temperature anomaly is the difference from an average, or baseline, temperature. The baseline temperature is typically computed by averaging 30 or more years of temperature data. A positive anomaly indicates the observed temperature was warmer than the baseline, while a negative anomaly indicates the observed temperature was cooler than the baseline.”
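
As a quick worked example with made-up numbers:

anomaly = observed temperature − baseline temperature

If the 1951-1980 baseline at a station is 14.0°C and the observed annual mean is 14.3°C, the anomaly is 14.3 − 14.0 = +0.3°C; an observed mean of 13.6°C would give an anomaly of 13.6 − 14.0 = −0.4°C.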

Interpreting the Visualization

The course leaves it up to the learner to decide which visualization tool to use to display the temperature change information. Although I have experience with multiple visualization programs like QlikView and Power BI, Tableau is my tool of choice. I didn’t just create a static visualization; I created an interactive dashboard that you can reference by clicking below.

From a data perspective, I believe the numbers in the file the course provides are a bit different from those in the one linked here, but you can see the format of the data, which needs to be pivoted in order to make an appropriate line graph.

All of the data in this set illustrates that temperature anomalies relative to the corresponding 1951-1980 mean temperatures are increasing as the years progress. Every line graph of readings from meteorological stations shows an upward trend in temperature deviations. The distribution bins illustrate that the higher temperature deviations occur in more recent years, with recency indicated by the intensity of the color red.

Let’s break down the visualization:

UIUC Top Portion

Top Section Distribution Charts:

  • There are three sub-sections representing global, northern hemisphere and southern hemisphere temperature deviations
  • The x axis represents temperature deviations in bins of 10 degrees
  • The y axis is a count of the number of years that fall between the binned temperature ranges
    • For example, if 10 years have a recorded temperature anomaly between 60 and 69 degrees, then the x axis would be 60 and the y axis would be 10

UIUC Distribution Focus

  • Each 10-degree bin comprises the various years whose temperature anomalies fall within the respective range
    • For example, in the picture above, the year 1880 (as designated by the tooltip) had a temperature anomaly that was 19 degrees lower than the 30-year average. This is why the corresponding box for the year 1880 is not intensely colored.
    • Additionally, the -19 degree anomaly is located in the -10 degree bin (which contains anomalies from -10 to -19 degrees)
    • These aspects are more clearly illustrated when interacting with the Tableau Public dashboard
  • The intensity of the red color indicates the recency of the year; for example, the year 1880 is represented as white while the year 2014 is indicated by a deep red color

Bottom Section Line Graph Chart:

UIUC Bottom Portion

  • The y axis represents the temperature deviation from the corresponding 1951-1980 mean temperatures
  • Each line represents the temperature deviation at a specific geographic location during the 1880-2014 period
  • The x axis represents the year of the temperature reading

UIUC Global Average

In the above picture I strip out the majority of the lines, leaving only the global deviation line. Climate science deniers may want to look away, as the data clearly shows that global temperatures are rising.

Bottom Line:

All in all, I thought it was a decent class covering very theoretical issues regarding data visualization. Practicality is covered exclusively in the exercises, as the class does not provide any instruction on how to use the tools required to complete it. I understand the reason: this is not a “How to Use a Software Tool” class.

I’d define the exercises as “BYOE” (i.e., bring your own expertise). The class forces you to do your own research when it comes to visualization tool instruction. This is especially true of the second exercise, which requires you to learn how to visualize graphs and nodes. I had to learn how to use a program called Gephi to produce a network map of the cities in my favorite board game, Pandemic. The lines between the city nodes are the paths one can travel within the game.

UIUC Data Viz Week 3

If you’re looking for more practicality and data visualization best practices, as opposed to hardcore computer science topics, take a look at the Coursera specialization from UC Davis called “Visualization with Tableau”.

In case you were wondering, I received a 96% grade in the UIUC course.

My final rating for the class is 3 stars out of 5; worth a look.

How to Dynamically Pivot Data in SQL Server


SQL is the lifeblood of any data professional. If you can’t leverage SQL and you work with data, your life will be more difficult than it needs to be.

In this video I use SQL Server Express to turn a simple normalized dataset into a pivoted dataset. This is not a beginner video, as I assume you are familiar with basic SQL concepts.

T-SQL is Microsoft’s SQL dialect, which contains additional functions and capabilities over and above the ANSI standard. We’ll use some of these functions to turn the following dataset, which displays average rents in major American cities, into a pivoted, denormalized dataset.

The city values in the City column will become individual columns in the new pivoted dataset, with their respective Average Rent values appearing underneath.

We’re going to transform this:

Normalized Data

Into this:

Pivoted Data

Notice how the city values are now column headers and the respective Average Rent values sit underneath.
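
Before diving into the dynamic version, it helps to see the core PIVOT syntax with hard-coded column names. The city names below are hypothetical placeholders; the real query assembles this list at runtime:

-- Static pivot: the IN list must be typed out by hand
SELECT [City Code], [Metro], [County], [State], [Population Rank],
       [New York], [Chicago], [Houston]   -- hypothetical city values
FROM [dbo].[tbl_Rent]
PIVOT ( MAX([Average Rent])
   FOR [City] IN ([New York], [Chicago], [Houston]) ) AS P

The drawback is obvious: every time a new city appears in the data, the query has to be edited. The dynamic version below solves that by building the IN list from the table itself.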

Make sure you watch the video, but here is the code used in the example.

IF OBJECT_ID('tempdb..##TBL_TEMP') IS NOT NULL
DROP TABLE ##TBL_TEMP

--This variable will hold the dynamically created SQL script
DECLARE   @SQLQuery AS NVARCHAR(MAX)

--This variable will hold the comma-separated, bracket-quoted list of pivoted column names
DECLARE   @PivotColumns AS NVARCHAR(MAX)

--Build the column list from the City values, e.g. [New York],[Chicago],...
SELECT   @PivotColumns = COALESCE(@PivotColumns + ',', '') + QUOTENAME([City])
FROM [dbo].[tbl_Rent]

/* UNCOMMENT TO SEE THE NEW COLUMN NAMES THAT WILL BE CREATED */
--SELECT   @PivotColumns

--Create the dynamic query with all the values for the
--pivot column at runtime.
--List all fields except the pivot column ([City]) and the value column ([Average Rent])

SET   @SQLQuery =
   N'SELECT [City Code],[Metro],[County],[State],[Population Rank],' +   @PivotColumns + '
   INTO ##TBL_TEMP
   FROM [dbo].[tbl_Rent]

   PIVOT( MAX([Average Rent])
      FOR [City] IN (' + @PivotColumns + ')) AS Q'

/* UNCOMMENT TO SEE THE DYNAMICALLY CREATED SQL STATEMENT */
--SELECT   @SQLQuery

--Execute the dynamic query; the results land in the global temp table ##TBL_TEMP
EXEC sp_executesql @SQLQuery

/* VIEW PIVOTED TABLE RESULTS */
SELECT * FROM ##TBL_TEMP


Big shoutout to StackOverflow for help with this example.