In this blog post I am going to discuss the statistics behind the differences in the success rates of Kickstarter projects based upon which day of the week the project is launched or ends. Many people have suggested answers to this question, but I am curious what the data says about the matter.

Well, let me start by saying… no one can actually say whether certain days of the week are "good" days or "bad" days to launch. Rather, we can only hope to determine whether the data shows any significant difference in the success rates of projects, based purely upon the day of the week they started or ended.

Before we continue I should say that this analysis did influence my thinking when I created my current Kickstarter project, so hopefully it can add some value to yours as well.

Okay, enough intro; let's get into the nitty-gritty details.

With the exception of Saturday, I think it's apparent that the weekday a project launches doesn't have much bearing on the average success rates of projects. There are small fluctuations in success rates from day to day, but our statistical analysis shows that these fluctuations are not strong enough to be considered significant.

*Something to note about the graph above is that it does not show the 0% mark on the Y-axis. This is a common practice in statistics, but an obvious consequence is that it makes the differences between our values visually appear much larger than their numerical values warrant. Keep this in mind when looking at any graph.*

According to the bar graph below, it appears that Kickstarter projects which ended on Friday had significantly lower success rates, while Kickstarter projects that ended on Tuesday had significantly higher success rates. The variation across all other days is not strong enough to conclude a significant difference exists.

You may have a lot of questions at this point. If you really want to understand the data, the process I took to arrive at these results, the statistical analysis I used and the underlying thought behind the analysis, you should read my set of blog posts called **Kickstarter Statistics 101 – A Rough Introduction to Stats via Kickstarter** on the **Kickstarter Statistics 101** landing page. If you just want to know what the data says, keep reading.

All the data has been compiled and laid out in an easy-to-understand fashion (you can read a blog about how the data was prepared for analysis here). The bar graphs above show the average success rates for each weekday. From this we can see that these success rates differ from day to day, but what we really want to know is whether they differ "significantly".

To find out whether these differences are significant, I ran a simple ANOVA test, which stands for **AN**alysis **O**f **VA**riance. The ANOVA determines how much discrepancy (or variation) exists within a group of data, and then compares that to the amount of variation that exists across all of the groups analyzed. I will post a blog later to describe this in more detail, but for now a Google or YouTube search should suffice to provide some basic understanding.
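As a rough illustration of what this test does (using made-up success-rate samples, not the actual Kickstarter data), here is how a one-way ANOVA could be run in Python with SciPy:

```python
from scipy.stats import f_oneway

# Hypothetical success-rate samples (as proportions) for three weekdays.
# These numbers are invented purely to demonstrate the test.
monday   = [0.41, 0.39, 0.44, 0.40, 0.42]
tuesday  = [0.48, 0.50, 0.47, 0.52, 0.49]
saturday = [0.30, 0.28, 0.33, 0.29, 0.31]

# f_oneway compares variation within each group to variation between groups.
f_value, p_value = f_oneway(monday, tuesday, saturday)
print(f"F = {f_value:.2f}, p = {p_value:.5f}")
```

With samples this far apart relative to their internal spread, the F-value comes out large and the p-value small, which is exactly the pattern the test is looking for.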

The ANOVA test will return many pieces of information, three of which are important to our conversation here: the F-critical, the F-value and the P-value. Without getting drowned in detail, here is what each of these means.

F-critical – the F-critical is a threshold. If the F-value calculated from the ANOVA test is greater than the F-critical, then we have reason to believe that our results are significant. If our F-value does not exceed this threshold, then we cannot be sure that our results are significant. The P-value, on the other hand, gives us an idea of just how significant our significance is. In other words, it tells us how often we could get the same result by pure chance.

So we want our P-value to be as low as possible (lower than 0.05 at least, but even lower is even better) and we want our F-value to surpass F-critical by as much as possible!
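For what it's worth, the F-critical threshold is just a percentile of the F-distribution, so it can be looked up directly. The degrees of freedom below are hypothetical placeholders, not the ones from my actual dataset:

```python
from scipy.stats import f

alpha = 0.05       # significance level: at most a 5% chance of a fluke result
df_between = 6     # 7 weekdays gives 7 - 1 = 6 between-group degrees of freedom
df_within = 700    # hypothetical; depends on the total number of projects sampled

# F-critical is the (1 - alpha) quantile of the F distribution.
f_critical = f.ppf(1 - alpha, df_between, df_within)
print(f"F-critical = {f_critical:.2f}")
```

An F-value above this number means the between-day variation is too large to be comfortably blamed on chance at the chosen significance level.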

When I performed the ANOVA on the success rates of the launch data, I found an F-critical of 0.913, an F-value of 6.01 and a P-value of 0.00039. These numbers validate that at least one of the days is significantly different and is driving these results. More on this in the next section.

For the end dates we had an F-critical of 2.45, an F-value of 12.11 and a P-value of 1.088E-6 (another very tiny number). Again, these numbers validate that at least one of the days is significantly different.

But now the question remains: Are certain days more significant than others? We can answer this question by determining which days in particular are causing such extreme results.

What I decided to do next was to remove each day one by one and re-run the ANOVA test. Once the results of the ANOVA test found that the differences in success rates were no longer significant, we would have a pretty good idea of which days really diverged from the norm.
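That remove-and-retest loop can be sketched as follows, again with made-up numbers standing in for the real per-day success rates:

```python
from scipy.stats import f_oneway

# Hypothetical per-day success-rate samples; the real values would come
# from the compiled Kickstarter dataset. Saturday is deliberately lower
# here to mimic an outlier day.
rates = {
    "Mon": [0.41, 0.40, 0.43], "Tue": [0.42, 0.44, 0.41],
    "Wed": [0.40, 0.42, 0.43], "Thu": [0.41, 0.43, 0.40],
    "Fri": [0.42, 0.40, 0.44], "Sat": [0.30, 0.28, 0.31],
    "Sun": [0.41, 0.42, 0.40],
}

# Drop each day in turn and re-run the ANOVA on the remaining six.
# When dropping a day makes the p-value jump above 0.05, that day
# was likely the one driving the significant result.
for day in rates:
    remaining = [samples for d, samples in rates.items() if d != day]
    _, p = f_oneway(*remaining)
    print(f"without {day}: p = {p:.3f}")
```

In this toy data, removing Saturday is the only deletion that pushes the p-value above 0.05, mirroring the logic used on the real launch-day data below.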

According to the ANOVA, removing Saturday from the project launch data resulted in a p-value of 0.472. This p-value is far too high (it should be lower than 0.05 for our test parameters) to conclude that the other days differ by any significant amount.

We achieved a p-value greater than 0.05 for the project end data only when we removed both Tuesday and Friday. Thus we can conclude that these were the two days influencing the data.

Statistically, we can thus assume that there's really not much of a difference between the other days, even though the average success rates do still differ by just a little bit.

Nothing I say here should be construed as truth or fact, but rather a good reason for you to leave a comment explaining how big of a dummy I am and sharing your own conclusions from the data.

My main conclusion here is that more analysis needs to be done, and probably of data that I currently do not have. I think success rates will give us a vague idea of what's happening, but so many other variables affect whether one day is better than another, and much of that is lost in this analysis.

What would be very interesting to analyze is the average dollar amount that projects raise on a given day of the week. I think this would be a much better proxy for the "activity level" of backers on a given day.

It's also difficult because Kickstarter is a global community, and my Saturday may be Friday or Sunday for someone else. I want to come back and add US-only data just to remove this problem.

But I think it is clear that projects launched on Saturdays have significantly lower success rates than those launched on other days. I wonder if this is due to there being less activity on Kickstarter on Saturdays, or something else. What do you think?

As far as Friday being a bad day to end your campaign, I think this is explained well by Jamey Stegmaier at Stonemaier Games in his blog post **here**. He explains that people are usually "frantically finishing all their work on Friday," so this may be a good indicator of why success rates are so low when projects end on Fridays.

As far as Tuesday, what do you think?

If you want to know more about how I got the data I am using, how I prepared the data for analysis, or the meaning of the word "significant" that I use all the time, click the highlighted links.

So what do you think? Do my interpretations make sense? Are there other things to consider? Do you think I could analyze the data in a different way to get better results?
