Underflow occurs in computing when the result of a calculation is smaller in magnitude than the smallest value the number format can represent, so it is rounded to a subnormal value (losing precision) or flushed to zero entirely. It is especially common in floating-point arithmetic, where repeatedly multiplying small values, such as probabilities in statistical models or tiny rates in financial software, can silently collapse to zero and corrupt downstream results.
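A minimal sketch of this behavior, assuming IEEE 754 double-precision floats (the format Python's `float` uses on virtually all platforms): values just below the smallest normal magnitude become subnormal and keep fewer significant bits, and values below the smallest subnormal round to zero.

```python
import sys

# Smallest positive *normal* double, about 2.2e-308.
tiny = sys.float_info.min

# Dividing further produces a *subnormal*: still nonzero,
# but represented with fewer significant bits (gradual underflow).
subnormal = tiny / 1e10
assert 0.0 < subnormal < tiny

# Below the smallest subnormal (~5e-324), the result rounds to zero.
assert (5e-324 / 2) == 0.0

# A classic pitfall: a product of many small probabilities underflows.
p = 1e-30
product = p ** 11          # mathematically 1e-330, not representable
assert product == 0.0      # the true nonzero value was lost
```

A common mitigation for the probability case is to work in log space, adding `math.log(p)` terms instead of multiplying probabilities, which keeps magnitudes well inside the representable range.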