File Size Converter

Convert between decimal and binary file size units according to the IEC 80000-13 standard.

Note: Decimal units (KB, MB, GB) use powers of 10. Binary units (KiB, MiB, GiB) use powers of 2 per IEC 80000-13.

Digital storage measurement presents a fundamental nomenclature challenge in computing. The industry uses two competing systems: decimal prefixes (KB, MB, GB) based on powers of 10 and binary prefixes (KiB, MiB, GiB) based on powers of 2. This discrepancy creates confusion when a hard drive marketed as 1 TB provides only about 931 GiB of usable space. Understanding the mathematical foundation of both systems is essential for accurate capacity planning. The distinction is not merely semantic: it represents a 7.4% difference at the gigabyte level, and the gap widens with each larger prefix. Storage manufacturers typically advertise capacity using decimal notation, while operating systems often report capacity using binary calculations, leading to apparent discrepancies that are in fact consistent with established standards.

The mathematical basis for this confusion stems from the proximity of 1,000 and 1,024. Early computer engineers used "kilobyte" to mean 1,024 bytes (2¹⁰) because binary systems naturally work with powers of 2. However, the International System of Units (SI) defines "kilo" as exactly 1,000 (10³). As storage capacities grew into megabytes, gigabytes, and terabytes, the percentage difference between the two systems became increasingly significant. A megabyte calculated as 1,000,000 bytes differs from a mebibyte of 1,048,576 bytes by approximately 4.9%. At the terabyte scale, this divergence reaches approximately 10%, representing substantial capacity differences in large storage deployments.
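The widening gap described above can be checked directly. This Python sketch (illustrative only, not part of the converter) computes the percentage by which each binary prefix exceeds its decimal counterpart:

```python
# Percentage by which each binary prefix exceeds its decimal counterpart.
# The gap grows with each prefix level because (1024/1000)^n compounds.
for level, (dec, bin_) in enumerate(
    [("KB", "KiB"), ("MB", "MiB"), ("GB", "GiB"), ("TB", "TiB")], start=1
):
    decimal_bytes = 1000 ** level
    binary_bytes = 1024 ** level
    gap = (binary_bytes - decimal_bytes) / decimal_bytes * 100
    print(f"1 {bin_} exceeds 1 {dec} by {gap:.1f}%")
```

Running this prints gaps of roughly 2.4%, 4.9%, 7.4%, and 10.0%, matching the figures quoted above.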

Decimal versus binary measurement systems

The decimal system follows SI prefix conventions where each unit is 1,000 times the previous unit. One kilobyte equals 1,000 bytes (10³), one megabyte equals 1,000,000 bytes (10⁶), one gigabyte equals 1,000,000,000 bytes (10⁹), and one terabyte equals 1,000,000,000,000 bytes (10¹²). This system aligns with standard metric prefixes used throughout science and engineering. Storage device manufacturers adopted this notation because it matches consumer expectations from other measurement contexts and results in larger advertised numbers. The decimal system provides straightforward calculations: a 500 GB drive contains exactly 500,000,000,000 bytes, requiring no conversion factors or memorization of powers of 2.

The binary system reflects how computer memory is actually addressed and organized. One kibibyte equals 1,024 bytes (2¹⁰), one mebibyte equals 1,048,576 bytes (2²⁰), one gibibyte equals 1,073,741,824 bytes (2³⁰), and one tebibyte equals 1,099,511,627,776 bytes (2⁴⁰). Operating systems, particularly Windows, traditionally display storage capacity using these binary calculations while labeling the results with decimal unit abbreviations (showing "GB" when technically measuring GiB). This practice perpetuates confusion despite the IEC 80000-13 standard establishing distinct binary prefix notation to resolve the ambiguity.

| Unit | Symbol | Bytes | Power |
|------|--------|-------|-------|
| Kilobyte | KB | 1,000 | 10³ |
| Kibibyte | KiB | 1,024 | 2¹⁰ |
| Megabyte | MB | 1,000,000 | 10⁶ |
| Mebibyte | MiB | 1,048,576 | 2²⁰ |
| Gigabyte | GB | 1,000,000,000 | 10⁹ |
| Gibibyte | GiB | 1,073,741,824 | 2³⁰ |
| Terabyte | TB | 1,000,000,000,000 | 10¹² |
| Tebibyte | TiB | 1,099,511,627,776 | 2⁴⁰ |
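The factors in the table are all a converter needs. A minimal Python sketch (the `convert` helper is our own, hypothetical name) expresses a byte count in every unit from both systems:

```python
# Conversion factors taken directly from the unit table above.
DECIMAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def convert(byte_count: int) -> dict:
    """Express a byte count in every decimal and binary unit."""
    factors = {**DECIMAL, **BINARY}
    return {unit: byte_count / factor for unit, factor in factors.items()}

sizes = convert(1_000_000_000_000)  # a drive marketed as 1 TB
print(f"{sizes['TB']:.2f} TB = {sizes['TiB']:.4f} TiB")
```

For a marketed 1 TB drive this yields exactly 1.00 TB but only about 0.9095 TiB, the same relationship shown in the capacity table below.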

Standards development and historical context

The International Electrotechnical Commission introduced binary prefix notation in 1998 through an amendment to IEC 60027-2; the IEC 80000-13 standard, published in 2008, superseded those clauses and carried the definitions forward. The standard defines the prefixes kibi-, mebi-, gibi-, tebi-, pebi-, and exbi- specifically for binary multiples of the byte. The motivation was eliminating decades of ambiguity in which "kilobyte" could mean either 1,000 or 1,024 bytes depending on context. Prior to standardization, the computer industry used KB, MB, and GB inconsistently: memory manufacturers typically meant binary values while storage manufacturers increasingly adopted decimal values. Technical documentation often failed to specify which convention applied, leading to miscommunication and disputes over advertised versus delivered capacity.

Adoption of the IEC binary prefix notation has been gradual and incomplete. Linux distributions and various technical tools now properly display GiB when calculating using binary multiples. The National Institute of Standards and Technology (NIST) endorses the IEC prefixes for U.S. technical documentation. However, Windows continues displaying "GB" for values calculated as GiB, perpetuating the confusion the standard was designed to resolve. Storage manufacturers uniformly use decimal notation, which is their prerogative under the standard, but must clearly communicate this choice. The industry transition remains ongoing, with technical audiences increasingly familiar with binary prefixes while general consumers primarily encounter decimal units in marketing materials.

| Advertised Capacity | Actual Bytes | OS Display (GiB) | Difference |
|---------------------|--------------|------------------|------------|
| 128 GB SSD | 128,000,000,000 | 119.2 GiB | -6.9% |
| 256 GB SSD | 256,000,000,000 | 238.4 GiB | -6.9% |
| 500 GB HDD | 500,000,000,000 | 465.7 GiB | -6.9% |
| 1 TB HDD | 1,000,000,000,000 | 931.3 GiB | -6.9% |
| 2 TB HDD | 2,000,000,000,000 | 1,862.6 GiB | -6.9% |
| 4 TB HDD | 4,000,000,000,000 | 3,725.3 GiB | -6.9% |
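Each row of the capacity table follows from a single division, and the shortfall percentage is constant because the ratio 10⁹/2³⁰ is fixed. A short Python sketch (illustrative) reproduces a few rows:

```python
# Divide the advertised byte count by 2^30 to get the OS-displayed GiB;
# the shortfall relative to the decimal figure is the same for every row.
GIB = 2**30

for label, byte_count in [("128 GB SSD", 128 * 10**9),
                          ("1 TB HDD", 10**12),
                          ("4 TB HDD", 4 * 10**12)]:
    displayed_gib = byte_count / GIB
    advertised_gb = byte_count / 10**9
    pct = (displayed_gib - advertised_gb) / advertised_gb * 100
    print(f"{label}: {displayed_gib:,.1f} GiB ({pct:.1f}%)")
```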

Practical applications and use cases

Storage capacity verification requires understanding both measurement systems to reconcile advertised specifications with actual available space. When purchasing a drive advertised as 1 TB, users should expect the operating system to report approximately 931 GiB if it uses binary notation. This is not defective hardware or false advertising but rather a predictable mathematical relationship. System administrators planning storage deployments must account for this discrepancy when calculating actual usable capacity. A storage array advertised as 100 TB provides approximately 90.95 TiB of addressable space when viewed through binary measurement, representing a significant planning consideration for data center operations.

Network bandwidth calculations demonstrate another practical application where unit confusion creates misunderstandings. Internet service providers typically advertise speeds in megabits per second (Mbps) using decimal notation: 100 Mbps equals 100,000,000 bits per second. File download speeds displayed by software often show megabytes per second, which may be calculated as either decimal MB or binary MiB depending on the application. Converting 100 Mbps to theoretical maximum download speed requires dividing by 8 (bits to bytes), yielding 12.5 MB/s or approximately 11.92 MiB/s. Protocol overhead, network congestion, and other factors reduce actual throughput below these theoretical maximums, but understanding the unit conversions prevents confusion about expected performance.
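The bandwidth arithmetic above can be sketched in Python (the `mbps_to_byte_rates` helper is a hypothetical name for illustration):

```python
# Convert an advertised line rate in Mbps to theoretical MB/s and MiB/s.
def mbps_to_byte_rates(mbps: float) -> tuple[float, float]:
    bits_per_second = mbps * 10**6     # ISPs advertise decimal megabits
    bytes_per_second = bits_per_second / 8
    return bytes_per_second / 10**6, bytes_per_second / 2**20

mb_s, mib_s = mbps_to_byte_rates(100)
print(f"100 Mbps = {mb_s:.1f} MB/s = {mib_s:.2f} MiB/s")
```

This reproduces the figures in the text: 100 Mbps corresponds to 12.5 MB/s, or about 11.92 MiB/s, before protocol overhead.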

Memory specifications traditionally use binary notation exclusively. A module advertised as 8 GB of RAM contains 8,589,934,592 bytes (8 GiB), not 8,000,000,000 bytes. This convention reflects how memory addressing works in binary systems where address lines naturally create capacities in powers of 2. However, marketing materials do not always use the IEC binary prefix notation, instead labeling binary quantities with decimal unit symbols. Technical documentation for memory modules should specify whether stated capacities follow binary or decimal conventions, though industry practice strongly favors binary measurements for all memory products regardless of labeling.

File transfer time estimation requires accurate conversion between storage sizes and transfer rates. Transferring a 4.7 GB DVD image (4,700,000,000 bytes) over a gigabit Ethernet connection operating at 125 MB/s theoretical maximum takes approximately 37.6 seconds under ideal conditions. Real-world performance factors including file system overhead, network protocol efficiency, and drive performance reduce actual speeds, but the calculation demonstrates the conversion methodology. Data center administrators performing bulk migrations must account for these calculations when scheduling maintenance windows and estimating completion times for large-scale data movement operations.
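The transfer-time estimate can be expressed the same way; a minimal Python sketch assuming an ideal, overhead-free link (the helper name is ours):

```python
# Ideal transfer time: file size divided by the link's theoretical byte rate.
def transfer_seconds(file_bytes: int, link_mbps: float) -> float:
    bytes_per_second = link_mbps * 10**6 / 8   # gigabit Ethernet: 125 MB/s
    return file_bytes / bytes_per_second

t = transfer_seconds(4_700_000_000, 1000)  # 4.7 GB image over gigabit Ethernet
print(f"{t:.1f} seconds")
```

As in the text, the 4.7 GB image takes about 37.6 seconds at the 125 MB/s theoretical maximum; real-world overhead lengthens this.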


Frequently Asked Questions

What is the difference between GB and GiB?

GB (gigabyte) uses the decimal system, where 1 GB equals 1,000,000,000 bytes (10⁹), following SI prefixes. GiB (gibibyte) uses the binary system, where 1 GiB equals 1,073,741,824 bytes (2³⁰). The IEC 80000-13 standard established binary prefixes (KiB, MiB, GiB) to eliminate confusion. The result is an approximately 7.4% difference between the two units: 1 GiB is about 7.4% larger than 1 GB.

Why do hard drives show less space than advertised?

Storage manufacturers advertise capacity using decimal units (1 TB = 1,000,000,000,000 bytes) while operating systems typically display capacity using binary units (1 TiB = 1,099,511,627,776 bytes). A 1 TB drive contains 1,000,000,000,000 bytes, which the operating system reports as approximately 931 GiB. This is not false advertising but rather a measurement system discrepancy defined by industry standards.

Which system should be used for technical documentation?

The IEC 80000-13 standard recommends using binary prefixes (KiB, MiB, GiB) when referring to quantities that are powers of 2, and decimal prefixes (KB, MB, GB) for powers of 10. Technical documentation should specify which system is being used to avoid ambiguity. Storage device specifications typically use decimal units, while memory specifications commonly use binary units, though explicit notation eliminates confusion.

Tool Vault · File Size Converter · Standards-based conversion between decimal and binary storage units.