Sean Carroll in Cosmic Variance:
Error bars are a simple and convenient way to characterize the expected uncertainty in a measurement, or for that matter the expected accuracy of a prediction. In a wide variety of circumstances (though certainly not always), we can characterize uncertainties by a normal distribution — the bell curve made famous by Gauss. Sometimes the measurements are a little bigger than the true value, sometimes they’re a little smaller. The nice thing about a normal distribution is that it is fully specified by just two numbers — the central value, which tells you where it peaks, and the standard deviation, which tells you how wide it is. The simplest way to think about an error bar is as our best guess at the standard deviation of the underlying distribution of our measurements, assuming everything is going right. Things might go wrong, of course, and your neutrinos might arrive early; but that’s not the error bar’s fault.
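A minimal numerical sketch of that idea, with made-up numbers (the true value, noise level, and sample size below are all hypothetical): simulate repeated Gaussian measurements of a single quantity, then recover the two numbers that fully specify the distribution.

```python
import numpy as np

# Hypothetical setup: repeated measurements of a quantity whose true value
# is 10.0, each off by Gaussian noise with standard deviation 0.5.
rng = np.random.default_rng(seed=0)
true_value = 10.0
measurements = true_value + rng.normal(loc=0.0, scale=0.5, size=1000)

# A normal distribution is fully specified by two numbers:
central_value = measurements.mean()   # where the bell curve peaks
std_dev = measurements.std(ddof=1)    # how wide it is -- the error bar

print(f"measurement = {central_value:.2f} +/- {std_dev:.2f}")
```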
Now, there’s much more going on beneath the hood, as any scientist (or statistician!) worth their salt would be happy to explain. Sometimes the underlying distribution is not expected to be normal. Sometimes there are systematic errors. Are you sure you want the standard deviation, or perhaps the standard error? What are the error bars on your error bars?
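The standard deviation versus standard error distinction is worth a quick sketch of its own (again with invented numbers): the standard deviation describes the scatter of individual measurements, while the standard error describes the uncertainty on their mean, which shrinks as the square root of the number of measurements.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
samples = rng.normal(loc=10.0, scale=0.5, size=100)  # hypothetical data

std_dev = samples.std(ddof=1)              # spread of individual measurements
std_err = std_dev / np.sqrt(len(samples))  # uncertainty on the mean: smaller by sqrt(N)

print(f"standard deviation: {std_dev:.3f}")
print(f"standard error:     {std_err:.3f}")
```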
More here.