I had one goal this week: could I show all the measures in the space where the original scorecard shows two?
The answer? Yes, and not just by using 6pt fonts!
Bullets are an easy way to see actual against a target, and they take up way less space than curvy bar charts. Missed targets are encoded twice: the bar is below the target reference line and the label is red.
One thing I don’t like about my approach is the arbitrary axis lengths. Some of the metrics are percentages, so you can set the axis range from 0 to 100%. That way the viewer can see three things:
The actual value
The distance from the target value
How close the actual and target values are to perfection.
Where the metrics are raw values, how should you set the axis range? Look at the charts on the right-hand side: they are all on very different scales. Should I let the chart tool set the range automatically? If I do that, the bar or reference line will sit right at the right-hand edge of the view. Or should I artificially extend the axis, creating a nicer sense of white space?
There’s lots to like about the original:
There’s a thumbs up/thumbs down for good/bad performance. That makes it easy to identify which metrics are being met.
The actual value is labelled in the centre of the circle.
The targets are defined in text.
The main thing to dislike is the curvy bars. They don’t add anything, other than a sense of colour and false excitement. Really, to fix this scorecard, all they’d need to do would be to flatten out the bars and shrink the layout.
This week’s original focusses on the worst states to drive in. It’s nearly Christmas, so I wanted to turn that around and take a more positive approach: which state is the best? It turns out it’s Minnesota.
My makeover removes almost all detail (i.e. 49 out of 50 states!). After exploring the different metrics, I decided to focus on a single message – stay safe – rather than let people investigate the data for each state.
You can see that process in the GIF showing my exploration, below. I looked for correlations and patterns, but once I hit the map, I realised I wanted to focus on Minnesota, and spent about half my time getting the display just so.
I want to know what your favourite week was, and why. What have you learnt? What have been your highlights (and lowlights)? What’s the effect been on your community? And the wider dataviz community?
I’ll be writing a post on tableau.com before Christmas so please share your reflections with me, in the comments, on Twitter, on your blogs, or anywhere else I’ll see them!
* What does “coming to an end” mean? Andy will continue to add datasets each week. However, as of the end of the year, we will cease to update the Pinterest board and the dashboard of statistics. We hope you all still continue!
Andy Kirk and I did the 2016 #AskAndy anything webinar today. We hope you enjoyed it. Let us know your thoughts on Twitter using #AskAndy. This post contains the slides and links to the resources we shared.
I went super simple this week: all you can do in my viz is select a country and see where people went. You can only see one origin country, and I only exposed the most recent year.
The original chord diagram lets you see a very large amount of the dataset simultaneously.
I used to dislike chord diagrams: too complex, really messy, incomprehensible. But then this chord diagram was presented at Graphical Web in 2014, and it changed my mind. Why?
The designers took the time to explain how the chord diagram worked. Once you have worked out the mechanisms, the data pops out and becomes clear. Taking the time to read the instructions and learn how to read a chord diagram is time worth investing.
Chord diagrams require interactivity, and that’s fine. The initial state is an overwhelming confusion of lines. Interacting brings it to life. Charts that require interactivity can still be valid.
I do not believe there is another way to visualise flow that has so much detail. My own makeover this week is an admission of that: I’m using filters to show only a part of the data. Andy’s own makeover is a massive simplification of the dataset. It’s fine, as a matrix, but comes at the expense of detail, which the chord does contain. Almost all of this week’s makeovers show only a slice of the full data. Only the chord diagram allows you to access it all with ease.
People shouldn’t shy away from complex charts. Chord diagrams do not provide instant insight: you need to invest time to read them. That is not a reason to avoid a chart. Alan Smith discussed this on the PolicyViz podcast: he explained why the FT used a chord diagram this summer, knowing it was a chart that needed time to digest. It’s well worth a listen.
Chord diagrams cope well with a large range in your measures. Some countries have really huge numbers of people moving, while others have tiny ones. The outliers dwarf everything else when you encode with colour or length; I think width is a more successful encoding in this case.
I love the original chart. It’s visually striking, it’s engaging and there is a vast amount of detail available in one view, once you’ve devoted the time to learn how it works.
This week provided a good challenge. It’s difficult to present data which divides one percentage (US Wealth) into categories about another percentage (household income).
My first try was with an area chart. I like the area chart because it shows part-to-whole for the entirety of US Wealth:
But it doesn’t quite punch home the differential between the bottom 90% and the top 0.5%. Could I do that another way?
I chose to drop the history and focus on just the most recent year.
How about a stacked bar?
Or a bar chart?
They’re OK, but the fundamental problem is that this approach doesn’t capture the size of “Bottom 90%”. The words “Bottom 90” alone don’t convey the magnitude of the inequality.
To tell this story in the most powerful way, I think we’d need a way to encode the 90%/0.5% households, too. And rather than spend time making that viz, I’ll share this video instead. It does one of the best jobs of showing the extent of inequality I’ve ever seen:
We got to makeover with my favourite dataset this week. The full wildlife strike dataset is one of the best to explore. It has a great mix of measures and dimensions, and seemingly endless stories to find in it.
Our source was Kelly Martin’s excellent take on the data. Is it an amazing dashboard? Yes, without a doubt. Is it perfect? Of course not: the perfect visualization does not exist.
There is much to love about Kelly’s dashboard. Here are a few things that stick out:
Lovely title – a play on the Superman motto that sets up the viewer for exploration
Really nice layout with enough intrigue and flow to keep the viewer’s eye moving around the views to find out more
Effective use of colour: only one view has colour, which I find very pleasing to look at
Great annotations add some focus where it’s needed.
Log scale on y-axis condenses the data nicely (but be honest, did you notice it?!)
But all dashboards can be improved. Here are some of the challenges with this dashboard:
The axes are ‘invisible’. It took me several return visits to this dashboard to even notice the x and y axes. They seem a long way from the data, but there are actually some data points right at the left-hand side. Have you ever noticed them?
The floating map and the extent of the axes make me wonder how many data points are hidden behind the map. I suspect not many, because low-velocity strikes must surely happen on the ground.
It’s not clear what the marks on the scatterplot represent. They’re beautiful, for sure, but ask yourself (without using tooltips): what does each mark represent? I was unable to answer that question. Even with the tooltip it’s hard to describe what each mark shows. Don’t believe me? Then tell me in the comments what each mark shows, and how long it took you to work it out.
I loved Chris’ original treemap (for reasons explained below). When it was made Viz of the Day, I heard lots of people say that it was a terrible choice: “you can’t make any insight out of that treemap”, they said. However, I sat in a big group of customers and partners that day, and showed the viz on a screen. What happened? They engaged with it – the treemap generated curiosity in a way my bar chart doesn’t. The subtle use of highlighting on Chris’ original teases people into exploring the data.
However, one of the first things I’d noticed was that the most common words were also the most common words in English. For my makeover, I wanted to exclude those words. I downloaded that data from Wikipedia.
It turns out that ‘baby’, ‘oh’ and ‘yeah’ are the most common of the uncommon words (if I did this again, I’d probably exclude the next few hundred common words to start getting to the truly uncommon ones). I like that “Na” is in this list solely because of Hey Jude.
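For anyone who wants to replicate the filtering step, here’s a minimal Python sketch. The word counts and the stop-word set are illustrative stand-ins (the real exclusion list came from Wikipedia’s most-common-English-words data, and my actual makeover was built in Tableau, not code):

```python
from collections import Counter

# Hypothetical lyric word counts, as if parsed from a lyrics dataset.
lyric_counts = Counter({
    "the": 1200, "you": 950, "and": 900, "baby": 400,
    "oh": 380, "yeah": 350, "love": 300, "na": 210,
})

# Stand-in for Wikipedia's list of the most common English words.
common_english = {"the", "you", "and", "a", "to", "i", "it", "of"}

# Keep only words that are NOT everyday English words.
uncommon = {w: n for w, n in lyric_counts.items() if w not in common_english}

# Rank what's left by frequency: the most common of the "uncommon" words.
top = sorted(uncommon.items(), key=lambda kv: kv[1], reverse=True)
print(top[:3])
```

Extending the exclusion list (say, to the top few hundred English words) is just a matter of growing the `common_english` set.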
We needed some Austin-related data for MakeoverMonday live at Tableau Conference. We turned to Restaurant Inspection scores from Austin’s data site.
I went in search of lunch in order to do the makeover, and found myself in Frank’s, home of hot dogs and cold beer. I sat down and ordered a bacon-infused Bloody Mary. Seriously? Bacon in a Bloody Mary? It was amazing.
Anyway, it got me wondering how well Frank’s had performed in recent inspections. That led my direction. I reduced the entire dataset to just Frank’s inspections. Turns out their last inspection was right on the borderline of failure.
My conclusion? Wonderful Bloody Mary. They passed my Restaurant Inspection!
Andy and I hope you all enjoyed MakeoverMonday live, wherever in Austin you ended up doing it.