# Apparent Magnitude and Absolute Magnitude Scale

The apparent magnitude and absolute magnitude scale comprises two systems for quantifying the amount of light emanating from an astrophysical source. The apparent magnitude is a quantity directly measured by the observer, and it depends on the distance between the emitter and the observer. The absolute magnitude is inferred and is a measure of the intrinsic power of the source (and as such does not depend on the distance between the observer and the source).

Thus the apparent magnitude is like an apparent luminosity (or power) and the absolute magnitude is like an intrinsic luminosity (or power). Recall that flux ($F$) and luminosity ($L$) are related by the inverse-square law:

$F = \frac{L}{4\pi r^{2}}$
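As a quick numerical check of the inverse-square law, here is a minimal Python sketch (the function name and sample values are chosen purely for illustration; the solar figures used are approximate):

```python
import math

def flux(luminosity_watts, distance_m):
    """Flux from the inverse-square law: F = L / (4 pi r^2)."""
    return luminosity_watts / (4.0 * math.pi * distance_m**2)

# Illustrative values: the Sun's luminosity (~3.828e26 W) seen from 1 au (~1.496e11 m)
L_sun = 3.828e26
au = 1.496e11
print(flux(L_sun, au))  # roughly 1361 W/m^2, the solar constant
```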

and apparent magnitude is defined by:

${\rm Apparent \ Magnitude} \ \equiv \ m \ = \ -2.5\log{F} + K$

where $r$ is the distance between the source and observer, the logarithm is in base 10, and $K$ is a constant that has to be defined by convention. The number 2.5 comes from the fact that historically, when the naked eye was used as the observing device, 5 magnitudes corresponded to roughly a factor of 100 difference in flux; since $\log{100} = 2$, the coefficient is $5/2 = 2.5$.
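The defining relation and its factor-of-100 property can be sketched in Python (the function name is illustrative, and the zero point $K$ is left at an assumed default of 0):

```python
import math

def apparent_magnitude(flux, K=0.0):
    """m = -2.5 log10(F) + K; K is a convention-dependent zero point."""
    return -2.5 * math.log10(flux) + K

# A factor-of-100 drop in flux raises m by exactly 5 magnitudes,
# independent of the choice of K:
m_bright = apparent_magnitude(1.0)
m_faint = apparent_magnitude(0.01)
print(m_faint - m_bright)  # 5.0
```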

The absolute magnitude is defined as the apparent magnitude the source would have at a distance of 10 parsecs (10 pc, or about 32.6 light years). In other words, all objects are imagined to be placed at a distance of 10 pc so that their magnitudes and luminosities can be directly compared, taking out any effect of the diminishing of observed flux with distance.

${\rm Absolute \ magnitude} \ \equiv \ M \ = \ -2.5\log{F(r=10 \ {\rm pc})} + K$

So

$m - M \ = \ -2.5\log{\left[ \left( \frac{L}{4\pi r^{2}} \right) \ \div \ \left( \frac{L}{4\pi (10 \ {\rm pc})^{2}} \right) \right]}$

or

$m - M \ = \ 5\log{\left(\frac{r}{10 \ {\rm pc}}\right)} \ = \ 5(\log{r} - 1)$

where in the last expression $r$ is measured in parsecs. This difference is known as the distance modulus.
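The distance-modulus relation is easy to sketch numerically (the function name is illustrative; $r$ is taken in parsecs):

```python
import math

def distance_modulus(r_pc):
    """m - M = 5 log10(r / 10 pc) = 5 (log10 r - 1), with r in parsecs."""
    return 5.0 * (math.log10(r_pc) - 1.0)

# At r = 10 pc the modulus vanishes, so m = M by definition:
print(distance_modulus(10.0))   # 0.0
# At 100 pc a source appears 5 magnitudes fainter than its absolute magnitude:
print(distance_modulus(100.0))  # 5.0
```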

Note that the difference between apparent and absolute magnitudes does not depend on the convention constant ($K$). Also, since magnitudes are defined in terms of logarithms, the difference between two apparent magnitudes corresponds to a ratio of fluxes, and the difference between two absolute magnitudes corresponds to a ratio of intrinsic luminosities; neither of these ratios depends on $K$.

What has not been mentioned above is that the flux, luminosity, and magnitude of a source (whether apparent or absolute) obviously depend on the wavelength (or energy) range that is chosen to accumulate photons/light. This wavelength range (or bandpass) must absolutely be specified. Certain standard bandpasses have become conventional and acquired names (often a single letter). Selection and calibration of the constant $K$ for different wavebands is a gruesome and complex business and different conventions have been deployed depending on the application and the researchers. If you are interested in the details, consult this page on the UMD website.

Example: what is the ratio of fluxes of two stars with apparent magnitudes of $-15$ and $-13.5$?

Solution:

$m_{1} - m_{2} = -\frac{5}{2}\log{[F_{1}/F_{2}]}$

or

$F_{1}/F_{2} \ = \ 10^{[m_{2} - m_{1}]/2.5} \ = \ 10^{[-13.5 - (-15.0)]/2.5} \ = \ 10^{0.6} \ \approx \ 3.98$

So the star with $m = -15$ is about four times brighter than the star with $m = -13.5$.
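The same arithmetic can be checked with a short Python sketch (the function name is illustrative):

```python
def flux_ratio(m1, m2):
    """F1/F2 from m1 - m2 = -2.5 log10(F1/F2)."""
    return 10.0 ** ((m2 - m1) / 2.5)

print(round(flux_ratio(-15.0, -13.5), 2))  # 3.98
```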