If you use applied statistics in your career, odds are you’ve used the Great Assumption Of Our Era, the assumption of the Normal distribution. There are some good reasons for this. The Central Limit Theorem is usually thrown in there as a justification, and it works reasonably well for practical applications. But the Central Limit [...]

One of the primary goals of statistical process control is to reduce the probability of a “defect,” however you define it, to acceptable levels.

Probably the most widely known example is Six Sigma, which aims to keep the number of defects below 3.4 per million. (More on that later, considering that it technically corresponds to 4.5 sigma.)

Defects are often measured in PPM (parts per million), but statistical processes are usually understood in terms of standard deviations (sigma).

The terminology DPMO (defects per million opportunities) is also sometimes used in place of PPM, but it means essentially the same thing.

It should go without saying that being able to convert back and forth between PPM and sigma can be very handy. I went ahead and put together a “calculator” for you to accomplish this (okay, it’s an Excel spreadsheet). You can use the calculator to convert between PPM and sigma in both traditional Statistical Process Control and Six Sigma (and yes, the results are different).
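If you'd rather see the conversion in code than in a spreadsheet, here's a minimal Python sketch (the function names are my own, not from the calculator). It computes the one-sided defect rate beyond a given sigma level using the standard normal CDF, with and without the 1.5-sigma long-term shift that Six Sigma applies — that shift is exactly why "six sigma" corresponds to 3.4 PPM, i.e. 4.5 actual sigma:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def sigma_to_ppm(sigma, shifted=True):
    """Defects per million beyond the upper spec limit at `sigma`.

    With shifted=True, apply the Six Sigma convention of a 1.5-sigma
    long-term drift in the process mean; with shifted=False, use the
    traditional (unshifted) SPC interpretation.
    """
    z = sigma - 1.5 if shifted else sigma
    return 1e6 * (1.0 - phi(z))

print(round(sigma_to_ppm(6.0), 1))       # Six Sigma convention: 3.4 PPM
print(sigma_to_ppm(6.0, shifted=False))  # unshifted: well under 0.01 PPM
```

Going the other way (PPM to sigma) just means inverting the normal CDF, which you can do numerically (e.g. by bisection) or with `scipy.stats.norm.ppf`.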

To get the calculator, and to sign up for more updates like this one, use the form below. (I will never share your email address with anybody.) Or read on to understand how this conversion works, and set it up yourself.

Repeat after me: “statistical significance is not everything.” It’s just as important to have some measure of how practically significant an effect is, and this is done using what we call an effect size. Cohen’s d is one of the most common ways we measure the size of an effect. Here, I’ll show you how to calculate it. [...]
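As a quick illustration of the calculation (the sample data here is made up, not from the post): Cohen's d for two independent samples is the difference in means divided by the pooled standard deviation.

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(sample1, sample2):
    """Cohen's d for two independent samples, using the pooled SD."""
    n1, n2 = len(sample1), len(sample2)
    s1, s2 = variance(sample1), variance(sample2)  # sample variances (n - 1)
    pooled_sd = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (mean(sample1) - mean(sample2)) / pooled_sd

d = cohens_d([5, 6, 7, 8], [3, 4, 5, 6])
print(round(d, 2))  # 1.55 -- a large effect by Cohen's benchmarks
```

By Cohen's rough benchmarks, d around 0.2 is small, 0.5 is medium, and 0.8 or more is large.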

Hey everybody. I’ve added another resource to the “Downloads” section of the site up top: a new spreadsheet. This one makes it easy to combine standard deviations from multiple samples, even if you don’t have the raw data. You can pick it up here.
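The idea behind combining SDs without raw data is that each group's size, mean, and SD lets you reconstruct its sum of squares about the grand mean. Here's a sketch in Python (my own function, not the spreadsheet's formulas, and using the n − 1 sample-variance convention throughout):

```python
from math import sqrt

def combined_sd(sizes, means, sds):
    """Combine sample SDs from several groups given only each group's
    size, mean, and SD (no raw data needed)."""
    N = sum(sizes)
    grand_mean = sum(n * m for n, m in zip(sizes, means)) / N
    # Within-group sum of squares plus between-group sum of squares
    ss = sum((n - 1) * s ** 2 + n * (m - grand_mean) ** 2
             for n, m, s in zip(sizes, means, sds))
    return sqrt(ss / (N - 1))

# Two groups of 3 with means 2 and 5 and SD 1 each -- equivalent to
# pooling the raw values [1, 2, 3] and [4, 5, 6]
print(round(combined_sd([3, 3], [2, 5], [1, 1]), 4))  # 1.8708
```

Note that this is not the same as averaging the SDs: the spread between the group means contributes to the combined SD as well.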

Today I’m going to finish up this series on data analysis in Excel. This time around, I’ll cover all the basic statistics like correlation, covariance, descriptive statistics, and so on. We’ll also talk about a few miscellaneous tools for exponential smoothing, Fourier analysis, moving averages, random number generation, rank and percentile, and sampling.

If you’ve ever tried to set up a legitimate statistical test in Excel, you already know it’s painful, but if you have the Analysis ToolPak enabled, things get a bit easier. Today, we’re going to learn how to run statistical tests in Excel. We’ll cover F-tests to compare variances, t-tests to compare two means, and ANOVA to compare multiple means.
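If you ever need the same three tests outside of Excel, SciPy covers them in a few lines. This is an illustrative sketch with made-up data, not the post's Excel walkthrough:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements from three process settings
a = np.array([4.1, 5.2, 6.3, 5.8, 4.9])
b = np.array([5.5, 6.1, 7.0, 6.4, 5.9])
c = np.array([6.2, 7.1, 6.8, 7.5, 6.9])

# F-test: do a and b have the same variance?
F = np.var(a, ddof=1) / np.var(b, ddof=1)
df1, df2 = len(a) - 1, len(b) - 1
p_f = 2 * min(stats.f.sf(F, df1, df2), stats.f.cdf(F, df1, df2))  # two-sided

# t-test: do a and b have the same mean?
t_stat, p_t = stats.ttest_ind(a, b)

# One-way ANOVA: do a, b, and c all share the same mean?
F_anova, p_anova = stats.f_oneway(a, b, c)

print(p_f, p_t, p_anova)
```

As in Excel, you would run the F-test first to decide whether the equal-variance t-test is appropriate.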

Statistical analysis in Excel is a huge pain unless you know how to enable the Analysis ToolPak. In part 1 of this series on data analysis in Excel, I’m going to tell you how to do that. Next, we’ll talk about regression analysis (the real thing, including multiple variables, not just fitting a line to a graph). This whole post should take 20 to 40 minutes.
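For comparison, here's what the same kind of multiple-variable regression looks like outside Excel, as a least-squares fit in Python. The data is fabricated to satisfy y = 1 + 2·x1 + 0.5·x2 exactly, so the recovered coefficients are known in advance:

```python
import numpy as np

# Hypothetical data: predict y from two explanatory variables
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
y  = np.array([4.0, 5.5, 9.0, 10.5, 13.5])  # exactly 1 + 2*x1 + 0.5*x2

# Design matrix with a column of ones for the intercept, like the
# intercept the ToolPak's Regression tool reports
X = np.column_stack([np.ones_like(x1), x1, x2])
coefs, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)

intercept, b1, b2 = coefs
print(intercept, b1, b2)  # 1.0 2.0 0.5
```

The ToolPak's Regression tool is doing the same least-squares fit under the hood; it just also reports standard errors, R², and p-values for each coefficient.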

The p-value is one of the most widely used and important concepts in statistics, yet it is also widely misunderstood. Today we’ll talk about what it is, and how to obtain it.

(If you’re in a statistics class, or using this stuff out there in the real world, consider ordering “Statistics in Plain English” by Timothy Urdan. It’s got the readability of the Idiot’s Guide on the same subject, and (thank God) a non-textbook price, but without the glaring mistakes.)

Welcome back! This is the last part of the series on relativity. Today we’ll talk about how gravity affects time and space. You might want to look at last week’s post before you read this one. It explains the basics of gravity in General Relativity. Go back to the Theory of Relativity for Kids if you want to start at the very beginning.