Connecting Tableau with R: Combining Beauty and Power

The following is a guest post from Sinna Muthiah Meiyappan.

Tableau, as we all know, is the go-to tool for visualization. R is an open-source statistical language used by academics, statisticians, data analysts and data scientists to develop machine learning models. R owes much of its popularity to the power of its packages.

Why Connect Tableau with R?

An analytics tool can generate enormous value if it has the following aspects:

1) a user-friendly interface for a business user
2) the scope and power to run different analyses and deliver the results to that interface in an easily interpretable form, so the business user can understand them and make decisions

Tableau can be used to develop beautiful dashboards which can serve as a very good user interface. R can be used to develop and run complex machine learning models whose results can be displayed in Tableau dashboards in an easily interpretable way.

In short, by linking Tableau and R, you give business users the opportunity to create enormous value by running complex machine learning models and analyzing their results, without any knowledge of or experience in R!

How to Connect Tableau with R

1) Install the Rserve package, load it, and start the server by typing the following lines in your R console:

install.packages("Rserve")  # one-time installation
library(Rserve)             # load the package so Rserve() is available
Rserve()                    # start the Rserve server

Now, R should print ‘Starting Rserve…’. If you observe this output, then Rserve is running and waiting for Tableau to connect.

2) Open Tableau & click on Help > Settings & Performance > Manage External Service Connections.
3) In the dialog box that opens, choose ‘localhost’ for Server and type ‘6311’ for Port.
4) Then, click on Test Connection.

Now, you should get a dialog box stating, ‘Successfully connected to the Rserve service’. If you observe this message, then you are all set to use the power of R from within Tableau.

Objective of this article

All companies maintain a database containing details about their customers. Customer segmentation using clustering algorithms is a proven technique companies use to understand their customers, which helps them offer targeted deals that convert better! The objective of this article is to create a dashboard where an executive from a wholesale business can segment customers using R within Tableau, without having to know R programming, just by clicking in the dashboard. Then, we visualize the different customer segments to find patterns that can drive business decisions like providing incentives, discounts, etc.

Dataset Used

The dataset referenced in this post can be downloaded in CSV format from the UCI Machine Learning Repository by clicking on the ‘Data Folder’ link at the top left of the page. Each row contains details about one customer of the wholesaler: the channel, the region and the dollar amount spent on each product category, namely Fresh, Frozen, Grocery, Milk, Detergents Paper & Delicatessen. This dataset can be mined to understand the buying patterns and characteristics of customers based on their spending.

Designing a Customer Segmentation dashboard in Tableau

Loading data into Tableau

Open the CSV file, add a column named ‘Customer ID’ and fill it from 1 to 440.
Connect to the CSV file using the ‘Text File’ option in Tableau, and you should see the screen below in your Tableau worksheet.

Picture1

Creating Parameters

Let’s cluster customers based on their spending in different categories using the k-means clustering algorithm. K-means is an unsupervised learning algorithm that identifies clusters in data based on the similarity of the features used to form the clusters. In our case, these features are the customers’ spending in the different categories.

The k-means algorithm requires two inputs to form clusters: (1) the number of clusters to be formed and (2) how many times to run the algorithm before picking the best clustering. Basically, k-means picks as many random center points as specified in input 1 and assigns each data point to the nearest center according to the chosen distance measure. Once all points are assigned, it measures the Sum of Squared Errors (SSE). This process is repeated as many times as specified in input 2, and the model with the least SSE is picked as the result.
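In R, these two inputs correspond directly to the centers and nstart arguments of the built-in kmeans function. Here is a minimal stand-alone sketch of the call we will later issue from Tableau (the CSV file name is an assumption):

customers <- read.csv("Wholesale customers data.csv")
ds <- customers[, c("Fresh", "Frozen")]  # features used for clustering
set.seed(500)                            # make the random starts reproducible
km <- kmeans(ds,
             centers = 5,  # input 1: number of clusters to form
             nstart = 20)  # input 2: number of random starts; the run with the least SSE wins
km$tot.withinss                          # the SSE-style measure the algorithm minimizes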

Let’s give the user of the dashboard the option of specifying the number of clusters by creating a Tableau parameter. To do that, right-click on the Measures pane and click ‘Create Parameter’. Fill the Create Parameter dialog box with the details shown below and click OK.

Picture2

Right-click on the newly created ‘Centers’ parameter, select ‘Show Parameter Control’ and set the Centers value to 5.
For our results to be reproducible, we need to set a seed in R. The seed determines the sequence of random numbers used by the algorithm, so whenever we set the same seed and run a model, we get the same results. Let us also give the user the option of defining the seed by creating a parameter as follows:

Picture3

Right-click on the newly created ‘Seed’ parameter, select ‘Show Parameter Control’ and set the Seed value to 500. Now, your screen should look like this:

Picture4

SCRIPT Functions to Receive Data From R

Now, to segment customers into different clusters, we are going to pass data to R, cluster it there and get the results back in a calculated field that can be used in Tableau visualizations. Tableau has four scripting functions, named for the type of data they receive back from the external service Tableau is connected to: SCRIPT_REAL, SCRIPT_INT, SCRIPT_BOOL & SCRIPT_STR.

The syntax of a Script function is as follows:
SCRIPT_XXX(‘code to be executed as a string’, argument 1, argument 2, etc.)

Inside the code, the arguments are referred to as ‘.arg1’, ‘.arg2’, etc., respectively. All arguments, whether quantitative or categorical, must be passed to R in aggregated form, since a script function is a table calculation. As all our variables are quantitative, we aggregate them using the SUM function. The value of the last line of the code string is what R returns as the output of this calculated field.
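For example, a toy calculated field that asks R for the average Fresh spend would look like this (an illustration only, not part of our dashboard):

SCRIPT_REAL('mean(.arg1)', SUM([Fresh]))

Here, SUM([Fresh]) arrives in the R code as the vector .arg1, and the value of the last (and only) line, mean(.arg1), is returned to Tableau.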

So, let’s create a Calculated Field named ‘Cluster_kmeans’ with the following code in it and click OK:

Picture5
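The code recap below walks through this field line by line. Assembled, it should read roughly as follows (a sketch only: the bracketed parameter and field names are assumed to match the ones we created and the CSV columns):

SCRIPT_INT('
set.seed(.arg1[1])
ds <- data.frame(.arg3, .arg4, .arg5, .arg6, .arg7, .arg8)
km_res <- kmeans(ds, centers = .arg2[1], nstart = 20)
km_res$cluster
',
SUM([Seed]), SUM([Centers]), SUM([Fresh]), SUM([Frozen]),
SUM([Grocery]), SUM([Milk]), SUM([Detergents Paper]), SUM([Delicatessen]))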

Code Recap:

1) Line 1: Sets the seed so that the results can be regenerated if needed. As already said, k-means chooses the initial center points at random, and the clustering results will change if these center points change. Setting the same seed before each run ensures that the same center points are taken up for clustering.

2) Line 2: Combines all the features (i.e., the customer’s spend in each category) that must be used for clustering into a data frame.

3) Line 3: Runs the k-means model 20 times on the data frame generated above, with as many cluster centers as the user specified in the dashboard.

4) Line 4: The best model out of those 20 runs is picked, and its cluster assignments are returned to the Cluster_kmeans calculated field in Tableau, where they can be used in visualizations.

We pass 8 arguments to the script function. The first is Seed, the second is Centers (the number of clusters that we want the k-means algorithm to form for this data) and the remaining 6 are the spends in each category. The six spend arguments are vectors of 440 dollar amounts each (as there are 440 customers in our dataset).

What is interesting is that the Seed and Centers parameters are also passed as vectors of length 440, with the same value repeated; i.e., Seed has ‘500’ repeated 440 times. In R, set.seed expects a single number, not a vector of length greater than 1. So, we extract just the first element of the vector and pass it to the ‘set.seed’ function as follows:

set.seed(.arg1[1])

Similarly, we pass only a single value to the centers argument of the kmeans function. Given a single number, the algorithm generates that many random centers and starts assigning points to clusters. Given a vector, it instead takes the values in the vector as the locations of the initial center points. So, if you pass a vector of length 440 in which all the values are the same, you get an error as follows:

Error in kmeans(ds, centers = .arg2, nstart = 20) : initial centers are not distinct

To avoid this, we extract the first element of the vector alone and pass it to the ‘kmeans’ function as follows:

km_res <- kmeans(ds, centers = .arg2[1], nstart = 20)

Visualizing the clusters created by R

Now that the calculated field for clusters is created, we can start building the visualization shown in the figure below. Before that, go to Analysis > Aggregate Measures and turn it off. If you don’t, Tableau will aggregate the spend across all customers and send a single row of input to R, resulting in the following error: there are several center points to start with but only one data point to assign, so the clustering cannot be completed.

Error in kmeans(ds, centers = .arg2[1], nstart = 20) : more cluster centers than distinct data points

Picture6

This sheet describes the spend of each cluster in the Fresh and Frozen categories. To create the above visualization in Tableau, do the following:

1) Drag and drop ‘Cluster_kmeans’ into ‘Columns’ shelf and into Color
2) Drag and drop ‘Fresh’ and ‘Frozen’ into ‘Rows’ shelf
3) Set the chart type to ‘Gantt Bar’
4) From ‘Analytics’ tab at the top left, drag and drop ‘Box Plot’ into ‘Cells’ of the chart
5) From ‘Analytics’ tab at the top left, drag and drop ‘Average Line’ into ‘Panes’ of the chart
6) Adjust Opacity in the color palette by setting it to 50%
7) Fix the Y-axis of the graphs to read from 0 to 115,000 so that it is easier to compare across all six charts we are going to develop for the dashboard. (I chose 115,000 as the upper limit because the maximum spend by a single customer in a single category is of the order of 112k.)
8) You can adjust the tooltip to reflect the information that you deem necessary/relevant at this point.

Now, similarly create two other worksheets which would look as follows:

Picture7

This sheet describes the spend of each cluster in the Milk and Grocery categories.

Picture8

This sheet describes the spend of each cluster in the Delicatessen and Detergents Paper categories.

Creating Customer Segmentation Dashboard

Create a new dashboard and drag all three worksheets you created into the dashboard. Arrange the parameter controls and chart legends appropriately. Use blank containers to fill space if needed. Now your dashboard may look like this.

Picture9

So now, we have put together a dashboard where a business user (in this case, the wholesaler) can choose how many clusters to form from his customer data and understand the different clusters formed. The cluster results are always reproducible as long as the user inputs the same seed (in this case, 500).

Recalculating Clusters at the Snap of Your Fingers!

To change the number of clusters from 5 to any other value, all the user has to do is change the number in the Centers parameter list box. Tableau passes the updated parameter to R, the clusters are recalculated through the table calculation, and the dashboard is updated to reflect the new cluster details! The image below shows the dashboard when Centers is changed from 5 to 3.

Picture10

Features of this Customer Segmentation Dashboard

  • Since we fixed all Y-axes to the same range, the charts are comparable across categories.
  • The Gantt bar chart shows each customer as a bar. This also helps us spot clusters with too few customers in them (they will have very few bars), so we can keep that in mind when making decisions.
  • Hovering over the box plot in each pane gives details about the quantiles of spending by that cluster in that category.
  • Clicking on a specific cluster in the chart legend at the top right highlights only that cluster in the dashboard by dimming the others, and all the average lines in each category pane are recalculated to show the average spend of that cluster in each category.

Interpreting Results and Taking Actions to Generate Business Value

We, at Perceptive Analytics, feel that the main aim of any business intelligence dashboard is to deliver insight and inspire action in business users, because business value is not created until someone acts on the insights! It is our job as dashboard designers to make sure the dashboard is designed so that business users do not have to spend their time searching for insights or trying to understand the dashboard.

They should spend far less time organizing data and more time interpreting information and insights and strategizing action plans.
Let’s now see what insights we can generate from this dashboard with three clusters, shown above:

  • The highest spend registered by a single customer is in the Fresh category (at the top left of the dashboard).
  • The Fresh, Grocery, Milk and Detergents Paper categories each register a decent number of spends above $20k, so these are the categories where the wholesaler earns the most from his customers.
  • Looking at the spending pattern of the three clusters formed in the above graph, I can gather the following:
    • Cluster 1 (blue) spends indifferently across all categories (its spend is spread both above and below each category’s average)
    • Cluster 2 (orange) spends well above the category averages in the Grocery, Milk and Detergents Paper categories
    • Cluster 3 (green) spends extraordinarily in the Fresh category and like Cluster 1 everywhere else

Within the same cluster, some customers purchase many items, others fewer. Recommendations can be made to those who purchase less, based on the items they are missing compared to customers who purchase more in the same category. Category-related offer and discount mails can be targeted at the groups of customers who actively purchase in each respective category.

Now, the wholesaler can create test and control groups within clusters and start sending targeted marketing campaigns to the customers in the relevant segments. The test and control split is useful for knowing whether the campaigns are working. By continuously repeating this experiment of targeted campaigns and analyzing the results, the wholesaler can learn to spot patterns in his customer data and discover what kind of marketing campaign motivates customers to purchase more.

Conclusion

In this article, we created a dashboard that enables a business user to segment customers and visualize the spending patterns of different clusters to understand customers better. We leveraged the power of R to segment the customers from within Tableau itself. This knowledge can be used to make business decisions like offering trade discounts or targeted promotional offers.

Author Bio

This article was contributed by Sinna Muthiah Meiyappan.

Image courtesy of ARPORN SEEMAROJ / 123rf.com


How to Drill into Data Using Set Actions in Tableau

Drilling with Set Actions

If you’ve ever tried to use the default drill functionality within Tableau, you know that it could be a more user-friendly experience. The default table drill functionality opens all of the options at the next drill level, which can force a user to lose sight of the data they’re focusing on. A more user-friendly option lets the user drill into only a specific selected value, so focus and attention can be maintained. This is otherwise known as asymmetric drill down.

Fortunately, as of version 2018.3, Tableau has added Set Actions as new functionality. At a high level, developers can take an existing set and update its values based upon a user’s actions in the visualization. The set can be employed via a calculated field, via direct placement in the visualization, or on the marks card.

In lay terms this means empowering a user with more interactivity to impact their analyses.

In this first video, I’ll demonstrate a use of set actions on an NBA data set. We’ll drill from Conference to Division to Team to Player. This tip will be easily applicable to your Tableau data. And with the bonus tree-map tip you’ll release your inner Piet Mondrian.

Feel free to interact with the set action example on Tableau Public and then download and dissect the workbook.


Drilling with Level of Detail (LOD) Calculations

If you want to stay with a classic approach, a nice Level of Detail (LOD) workaround can be employed to drill into the next level. Here is a tip that accomplishes a similar outcome, where I demonstrate a technique originally presented by Marc Rueter at Tableau Conference 2017.

Now that I’ve equipped you with the knowledge to incorporate customized drilling functionality into your analyses, go forth and do some great things with your data!

References:

https://onlinehelp.tableau.com/current/pro/desktop/en-us/actions_sets.htm
https://www.tableau.com/learn/tutorials/on-demand/set-actions
https://www.basketball-reference.com/leagues/NBA_2018.html
https://www.youtube.com/watch?v=d22A4XVoUEs

Image Copyright dzxy on 123rf.com

Yet Another Market Basket Analysis in Tableau

This video represents part two in my Market Basket Analysis series.

The steps in the post were inspired by the book Tableau Unlimited, written by a former co-worker of mine, Chandraish Sinha. I wasn’t planning to construct another market basket analysis video, but when I saw the approach outlined in his book, I felt it warranted sharing with my readers and followers.

In this version we’ll use the default Tableau Superstore data to show the relationship between sub-categories on an order, all without using a self-join. The visualization and analysis are driven by a user-selection parameter.

Once the user selects a sub-category, the bar chart visualization updates to reflect the number of associated sub-category items on the same order.

Sample Superstore Data 2

Watch the video and as always get out there and do some great things with your data!

Feel free to also check out Part 1 here where we create a simpler correlation matrix version that shows all the sub-category relationships in one visual.

Market Basket Analysis in Tableau

A favored analysis technique employed by retailers to understand the purchase behavior of their customers is market basket analysis. When you log on to Amazon, you’ve most likely noticed the “Frequently Bought Together” section, where Jeff Bezos and company would like to cross-sell you additional products based upon the purchase history of other people who have purchased the same item.

Market Basket Analysis influences how retailers institute sales promotions, loyalty programs, cross-selling/up-selling and even store layouts.

If a retailer observes that most people who purchase Coca-Cola also purchase a package of Doritos (I know they’re competing companies), then it may not make sense to discount both items at once as the consumer might have purchased the associated item at full price anyhow. Understanding the correlation between products is powerful information.

In this video, we’ll use Tableau Superstore data to perform a simple market basket analysis.

Sample Superstore Data 2

Feel free to interact with this market basket analysis on Tableau Public and then download and dissect the workbook.

Watch the video and as always get out there and do some great things with your data.

Feel free to also check out Part 2 here where we’ll create an analysis driven by a user selection parameter.

Use Clustering Analysis in Tableau to Uncover the Inherent Patterns in Your Data

The following is a guest post.

Clustering:

Clustering is the grouping of similar observations or data points. Tableau enables clustering analysis by using the K-means model and a centroid approach. This model divides the data into k segments with a centroid in each segment. The centroid is the mean value of all points in that segment. The objective of this algorithm is to place centroids in segments such that the total sum of distances between centroids and points in their segments is as small as possible.
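Tableau handles all of this internally, but conceptually it matches R’s built-in kmeans function. A quick sketch of the same objective, using the flower measurements we will plot below (an illustration of the concept, not Tableau’s exact internals):

km <- kmeans(iris[, c("Petal.Width", "Petal.Length")], centers = 3)
km$centers       # the centroid (mean point) of each segment
km$tot.withinss  # the total within-segment distance measure the algorithm minimizes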

In this post we will demonstrate some of clustering’s practical applications using Tableau. To get started, download the dataset from this link.

Let’s get our hands dirty!

Examine the dataset; it contains data about different characteristics of flowers. Once the data is loaded into Tableau, it will look like the screenshot below.

Picture1

Now let’s plot a visualization between petal width and length. Just drag and drop the petal width and length onto rows and columns as shown below.

Picture2

Here we see only one data point, as Tableau aggregates measures by default. We can “un-aggregate” the data with a click, as shown below.

Picture3

Just go to the Analysis menu and un-tick the Aggregate Measures option.

Picture4

Now we can observe a scatter plot of two measures. Let’s cluster these data points according to their species by navigating to the analytics pane as shown below.

Picture5

Drag and drop the cluster option on to the plot.

Picture6

Clusters are formed automatically, although there is an option to change the number of clusters. Users can also select the variables used for cluster generation, although Tableau uses the fields in the view to form the initial clusters.

Picture7

We can visually observe the clusters and Tableau provides a handy option that displays cluster statistics.

Picture8

Click on the “describe clusters” option to observe a summary and model description.

Picture9

The Summary tab provides a high-level overview of the variables used in the model and various sum-of-squares statistics. Let’s turn our attention to the Models tab and the main generated statistics.

Picture10

F-Ratio:

The F-ratio is used to determine whether the expected values of a variable differ between groups. It is a ratio of sums of squares (variances).

F = Between-Group Variability / Within-Group Variability

The greater the F-statistic, the better the corresponding variable is at distinguishing between clusters.

P-Value:

In a statistical hypothesis test, the p-value helps you determine the significance of your results. Here, the p-value is the probability that the F-distribution of all possible values of the F-statistic takes on a value greater than the actual F-statistic for a variable. If the p-value falls below a specified significance level, the null hypothesis can be rejected. The smaller the p-value, the more the expected values of the corresponding variable differ among clusters.
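As a small illustration of how the two statistics relate, this is how a p-value is computed from an F-statistic in R; the F-ratio and degrees of freedom below are placeholder values:

f_stat     <- 251.1  # placeholder F-ratio for one variable
df_between <- 2      # k - 1, where k is the number of clusters
df_within  <- 147    # n - k, where n is the number of data points
pf(f_stat, df_between, df_within, lower.tail = FALSE)  # probability mass above f_stat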

Tableau provides an option to save the formed clusters into a group that can be used for subsequent analyses. Simply drag and drop the cluster from the marks pane to the Dimensions section to save it as a group.

Picture11

Tableau doesn’t allow clustering on these types of fields:

  • Dates
  • Bins
  • Sets
  • Table Calculations
  • Blended Calculations
  • Ad-hoc Calculations
  • Parameters
  • Generated Longitude and Latitude Values

Let’s look at another example using the default World Indicators data set that comes with Tableau. Open the sample workbook named World Indicators and explore the data regarding various countries.

Picture12

Try using different variables to form clusters. Use the model description to learn about the various countries based upon their clusters.

Picture13_1

Here it shows average life expectancy, average population above 65 years and urban population. These statistics provide insight into the composition of the clusters. We can also see which countries make up each cluster, as shown below: select any cluster, go to the “Show Me” tab and choose the text table to view the names of the countries present in that cluster.

Picture14

Conclusion:

We’ve only covered a few scenarios showing how clustering aids in segmenting data. Clustering is an essential function of exploratory data mining. Keep exploring the results of cluster analysis by using different types of data sets. Keep Rocking!

“Happy Clustering!!”

Author Bio

This article was contributed by Juturu Pavan, Prudhvi Sai Ram, Saneesh Veetil and Chaitanya Sagar.

Use the Power BI Switch Function to Group By Date Ranges

In this latest video, I’ll explain how to use a handy DAX function in Power BI in order to group dates together for reporting. We’ll examine a dashboard that contains fields corresponding to purchase item, purchase date and purchase cost. We’ll then create a calculated column and use the SWITCH function in Power BI to perform our date grouping on the purchase date.

Watch the video to learn how to group dates into the following aging buckets, which can be customized to fit your specific needs; a sketch of the bucketing logic follows the list.

  • 0-15 Days
  • 16-30 Days
  • 31-59 Days
  • 60+ Days
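The DAX itself is shown in the video; purely for comparison, here is the same bucketing logic sketched in R (the purchases data frame and its purchase_date column are hypothetical):

purchases <- data.frame(purchase_date = as.Date(c("2019-01-05", "2019-02-20")))  # hypothetical sample
age_days <- as.numeric(Sys.Date() - purchases$purchase_date)  # age of each purchase in days
purchases$age_bucket <- cut(age_days,
                            breaks = c(0, 15, 30, 59, Inf),
                            labels = c("0-15 Days", "16-30 Days", "31-59 Days", "60+ Days"),
                            include.lowest = TRUE)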

If you are familiar with SQL, then you’ll recognize that the SWITCH function is very similar to the CASE statement, which is SQL’s way of handling IF/THEN logic.

Even though we’re creating a calculated column within Power BI itself, best practice is to push calculated fields to the source when possible. The closer calculated fields are to the underlying source data, the better the performance of the dashboard.

My Submission to the University of Illinois at Urbana-Champaign’s Data Visualization Class

I’m a huge fan of MOOCs (Massive Open Online Courses). I am always on the hunt for something new to learn to increase my knowledge and productivity, and because I run a blog, MOOCs provide fodder for me to share what I learn.

I recently took the Data Visualization class offered by the University of Illinois at Urbana-Champaign on Coursera. The class is offered as part of the Data Mining specialization of six courses that, when taken together, can lead to graduate credit toward the university’s online Master of Computer Science degree in Data Science.

Ok enough with the brochure items. For the first assignment I constructed a visualization based upon temperature information from NASA’s Goddard Institute for Space Studies (GISS).

Data Definition:

In order to understand the data, you have to understand why temperature anomalies are used as opposed to raw absolute temperature measurements. It is important to note that the temperatures shown in my visualization are not absolute temperatures but rather temperature anomalies.

Basic Terminology

Here’s an explanation from NOAA:

“In climate change studies, temperature anomalies are more important than absolute temperature. A temperature anomaly is the difference from an average, or baseline, temperature. The baseline temperature is typically computed by averaging 30 or more years of temperature data. A positive anomaly indicates the observed temperature was warmer than the baseline, while a negative anomaly indicates the observed temperature was cooler than the baseline.”

Interpreting the Visualization

The course leaves it up to the learner to decide which visualization tool to use to display the temperature change information. Although I have experience with multiple visualization programs like Qlikview and Power BI, Tableau is my tool of choice. I didn’t just create a static visualization; I created an interactive dashboard that you can reference by clicking below.

From a data perspective, I believe the numbers in the file the course provides are a bit different from those in the one I link to here, but you can see the format of the data, which needs to be pivoted in order to make an appropriate line graph.

All of the data in this set illustrates that temperature anomalies are increasing from the corresponding 1951-1980 mean temperatures as years progress. Every line graph of readings from meteorological stations shows an upward trend in temperature deviation readings. The distribution bins illustrate that the higher temperature deviations occur in more recent years. The recency of years is indicated by the intensity of the color red.

Let’s break down the visualization:

UIUC Top Portion

Top Section Distribution Charts:

  • There are three sub-sections representing global, northern hemisphere and southern hemisphere temperature deviations
  • The x axis represents temperature deviations in bins of 10 degrees
  • The y axis is a count of the number of years that fall between the binned temperature ranges
    • For example, if 10 years have a recorded temperature anomaly between 60 and 69 degrees, then the x axis would be 60 and the y axis would be 10

UIUC Distribution Focus.png

  • Each 10 degree bin comprises the various years whose temperature anomalies fall within the corresponding range
    • For example, in the picture above, the year 1880 (as designated by the tooltip) had a temperature anomaly 19 degrees lower than the 30 year average. This is why the corresponding box for the year 1880 is not intensely colored.
    • Additionally, the -19 degree anomaly is located in the -10 degree bin (which contains anomalies from -10 to -19 degrees); the sketch after this list makes this binning rule concrete
    • These aspects are more clearly illustrated when interacting with the Tableau Public dashboard
  • The intensity of the red color indicates the recency of the year; for example, the year 1880 is represented as white while the year 2014 is indicated by a deep red color
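To make the binning rule concrete, here is the truncation logic sketched in R (the anomaly values are made up for illustration):

anomalies <- data.frame(year = c(1880, 1944, 2014), anomaly = c(-19, 5, 65))  # placeholder data
anomalies$bin <- trunc(anomalies$anomaly / 10) * 10  # -19 falls in the -10 bin, 65 in the 60 bin
table(anomalies$bin)                                 # count of years per bin (the y axis)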

Bottom Section Line Graph Chart:

UIUC Bottom Portion

  • The y axis represents the temperature deviation from the corresponding 1951-1980 mean temperatures
  • Each line represents the temperature deviation at a specific geographic location during the 1880-2014 period
  • The x axis represents the year of the temperature reading

UIUC Global Average

In the above picture I strip out the majority of lines leaving only the global deviation line. Climate science deniers may want to look away as the data clearly shows that global temperatures are rising.

Bottom Line:

All in all, I thought it was a decent class covering fairly theoretical issues in data visualization. Practicality is covered exclusively in the exercises, as the class does not provide any instruction on how to use the tools required to complete it. I understand the reason: this is not a “How to Use a Software Tool” class.

I’d define the exercises as “BYOE” (i.e., bring your own expertise). The class forces you to do your own research with regard to visualization tool instruction. This is especially true of the second exercise, which requires you to learn how to visualize graphs and nodes. I had to learn how to use a program called Gephi in order to produce a network map of the cities in my favorite board game, Pandemic. The lines between the city nodes are the paths one can travel within the game.

UIUC Data Viz Week 3

If you’re looking for more practicality and data visualization best practices as opposed to hardcore computer science topics, take a look at the Coursera specialization from UC Davis called “Visualization with Tableau”.

In case you were wondering, I received a 96% grade in the UIUC course.

My final rating for the class is 3 stars out of 5; worth a look.