Why DOS Will Outlive Unix

Still think the title is clickbait? Maybe it is time to reconsider, because the DOS calendar lasts 69 years longer than the classic Unix calendar. And what if I told you that in DOS time, a minute lasts 32 seconds?

As you can see from the very beginning, this article is going to be somewhat different than usual, as I am going to talk about a concept most people take for granted. The concept of time, or more precisely, the way we represent time in our operating systems. Older developers still remember the Y2K bug, but not many of them are aware of the Year 2038 problem.

"Historia magisra vitae"

- Cicero

Y2K Bug

To give you some quick context: in the early days of computing, only two digits were used to represent the year, which meant that the year 2000 was stored as “00”, indistinguishable from the year 1900. I will not go into details about the Y2K bug. It was solved by widening the year to four digits, but it is a good example of how the things we take for granted can escalate unexpectedly.
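The classic trap is still visible in C's standard library, where struct tm counts years since 1900. A quick sketch of how the bug typically surfaced, by gluing a "19" prefix in front of the raw counter:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);
        /* tm_year counts years since 1900 -- printing it raw after a
           hardcoded "19" repeats the classic two-digit mistake:
           in 2025 this prints "19125". */
        printf("19%d\n", t->tm_year);         /* the Y2K-style bug */
        printf("%d\n",   t->tm_year + 1900);  /* the correct year  */
        return 0;
    }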

Unix Date/Time

Unix time is represented as the number of seconds since 1970-01-01 00:00:00 UTC. As you can see, it is a very simple representation, but it has its limitations. The most significant one is that a signed 32-bit counter overflows on 19 January 2038 at 03:14:07 UTC; one second later it wraps around to a negative value, which is interpreted as a date back in December 1901, not a reset to 1970. This is known as the Year 2038 problem. Because of the experience of the Y2K bug, this was caught early, and there are already solutions in place that extend the Unix time representation to 64 bits.
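To see exactly where that limit falls, here is a small C sketch that asks the standard library to format the last second a signed 32-bit counter can represent (assuming a platform whose time_t can hold the value):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* The largest value a signed 32-bit counter can hold. */
        time_t last_moment = (time_t)INT32_MAX;  /* 2147483647 seconds */

        /* gmtime() interprets it as seconds since the Unix epoch, in UTC. */
        struct tm *utc = gmtime(&last_moment);
        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", utc);

        printf("32-bit Unix time ends at: %s UTC\n", buf);  /* 2038-01-19 03:14:07 */
        return 0;
    }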

So, if we only take 32-bit Unix time into account, it will last until 19 January 2038, which is 13 years from now. But what about DOS? We are going to take a deep dive into the DOS time representation and see how it compares to Unix time.

Numerical Systems

I am pretty sure that most readers are familiar with number systems, but just to recap: besides decimal, the two most commonly used number systems in computing are binary and hexadecimal. Binary is a base-2 system, which means it uses only two digits: 0 and 1. Hexadecimal is a base-16 system, which means it uses sixteen digits: 0-9 and A-F.

Binary numbers are read from right to left, and each digit is weighted by the power of its index. The rightmost digit has a power of 0, the next one has a power of 1, and so on. The value of each digit is calculated as the digit multiplied by 2 raised to the power of its index. Any base raised to the power of 0 is equal to 1.

To visualize this, let's take a look into a few examples:

Power of 2    :  4   3   2   1   0
Binary number :  1   1   0   1   0
Decimal value : 16   8   0   2   0

Total: 16 + 8 + 0 + 2 + 0 = 26

Step-by-step explanation (right to left):

  • 0 × 2^0 = 0
  • 1 × 2^1 = 2
  • 0 × 2^2 = 0
  • 1 × 2^3 = 8
  • 1 × 2^4 = 16

Fancy math formula:

Decimal value = Σ (digit × base^index)
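The formula translates directly into code. A minimal C sketch (the function name binary_to_decimal is mine, purely for illustration):

    #include <stdio.h>

    /* Convert a string of '0'/'1' digits to its decimal value by
       applying digit × 2^index for every position, right to left. */
    unsigned binary_to_decimal(const char *bits) {
        unsigned value = 0;
        for (const char *p = bits; *p == '0' || *p == '1'; p++) {
            value = value * 2 + (unsigned)(*p - '0');  /* shift in the next digit */
        }
        return value;
    }

    int main(void) {
        printf("%u\n", binary_to_decimal("11010"));  /* prints 26 */
        return 0;
    }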

Hexadecimal to Decimal Conversion

Hexadecimal compared to decimal in a table:

Hexadecimal   Decimal
0             0
1             1
2             2
3             3
4             4
5             5
6             6
7             7
8             8
9             9
A             10
B             11
C             12
D             13
E             14
F             15

Hex 1A02 => Decimal?

Power of 16   :    3     2    1    0
Hexadecimal   :    1     A    0    2
Decimal value : 4096  2560    0    2

Total: 4096 + 2560 + 0 + 2 = 6658
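In practice you rarely do this by hand: C's standard library already knows how to apply digit × 16^index. A quick check of the arithmetic above:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* strtol with base 16 does the positional math for us. */
        long value = strtol("1A02", NULL, 16);
        printf("%ld\n", value);  /* prints 6658 */
        return 0;
    }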

DOS Date/Time

Now you might wonder: what does all this talk about binary and hexadecimal numbers have to do with DOS date and time? Well, unlike Unix, DOS packs its date and time representation into a compact 32-bit number (16 bits for the date and 16 bits for the time) by cleverly using bits for different components of the timestamp.

In September 2018, Microsoft published the MS-DOS source code on GitHub under the MIT license. Let’s unpack the date and time definitions by looking into the source code of Microsoft DOS.

If we look at line 108 of the file MSDOS.ASM, we will notice a comment that directly describes the date structure:

;   24           2      Date. Bits 0-4=day, bits 5-8=month, bits 9-15=year-1980

What this practically means is that the date is represented with 16 bits (2 bytes), where:

- Bits 0–4 represent the day (5 bits)
- Bits 5–8 represent the month (4 bits)
- Bits 9–15 represent the year as an offset from 1980 (7 bits)

If you are not familiar with the bit representation, let’s again visualize and explain:

When developing a program in a low-level assembly language, we can access each bit of a number directly. A 16-bit number is represented as 2 bytes, where each byte has 8 bits. The first byte holds bits 0 to 7, and the second byte holds bits 8 to 15. Since we have access to every bit, we can leverage that to create logical groups.

Let’s break down this 16-bit number into its three logical components, which are grouped by purpose:

Year (7 bits)          Month (4 bits)   Day (5 bits)
15 14 13 12 11 10 9    8  7  6  5       4  3  2  1  0

Day (bits 0–4)
Represented using 5 bits — allows values from 00000 (0) to 11111 (31). This covers all days of any month.

Month (bits 5–8)
Represented using 4 bits — allows values from 0000 (0) to 1111 (15). Only values 1–12 are valid months, so values 0 and 13–15 are ignored or invalid.

Year (bits 9–15)
Represented using 7 bits — allows values from 0000000 (0) to 1111111 (127).

This is an offset from 1980, which means the range is:

1980 + 0   = 1980 => minimum year
1980 + 127 = 2107 => maximum year
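A handy shortcut for all three ranges: the largest value an n-bit field can hold is 2^n - 1, which a few lines of C can confirm:

    #include <stdio.h>

    int main(void) {
        /* The largest value an n-bit field can hold is 2^n - 1. */
        printf("5 bits: %d\n", (1 << 5) - 1);  /* 31  -> day         */
        printf("4 bits: %d\n", (1 << 4) - 1);  /* 15  -> month       */
        printf("7 bits: %d\n", (1 << 7) - 1);  /* 127 -> year offset */
        return 0;
    }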

For example, let’s say we want to encode July 11th, 2025:

Day = 11 => Binary = 01011
Month = 7 => Binary = 0111
Year = 2025 - 1980 = 45 => Binary = 0101101

Year: 0 1 0 1 1 0 1
Month: 0 1 1 1
Day: 0 1 0 1 1

Now, we can combine these into a single 16-bit number:

Bits  : 0 1 0 1 1 0 1 0 1 1 1 0 1 0 1 1

This gives us the final binary representation of the date, which can be stored in a 16-bit integer. Finally, the dates are stored in a packed format, which means that the bits are tightly packed together without any gaps. The day and month are stored with their expected values, and the year is stored as an offset from 1980.
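As a sanity check, here is the same packing written as a small C sketch (dos_date_pack is my own illustrative name; real DOS does this in assembly):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack a calendar date into the 16-bit DOS format:
       bits 0-4 = day, bits 5-8 = month, bits 9-15 = year - 1980. */
    uint16_t dos_date_pack(int year, int month, int day) {
        return (uint16_t)(((year - 1980) << 9) | (month << 5) | day);
    }

    int main(void) {
        printf("%u\n", dos_date_pack(2025, 7, 11));  /* prints 23275, the packed word */
        return 0;
    }

Shifting the year left by 9 bits and the month left by 5 bits places each group exactly where the layout table above expects it.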

And now we can make sense of the assembly routines provided in the file.

When we call GETDATE, DOS adds 1980 to the stored year value.

SETDATE does the opposite: it subtracts 1980 before storing the year into the date structure, as you can see in the SETDATE routine.
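The inverse direction, mirroring GETDATE's +1980 adjustment, might look like this in C (dos_date_unpack is again a name of my own choosing):

    #include <stdint.h>
    #include <stdio.h>

    /* Unpack a 16-bit DOS date: extract each field with masks and
       shifts, then add 1980 back to the year, just like GETDATE. */
    void dos_date_unpack(uint16_t packed, int *year, int *month, int *day) {
        *day   =  packed       & 0x1F;  /* bits 0-4  */
        *month = (packed >> 5) & 0x0F;  /* bits 5-8  */
        *year  = (packed >> 9) + 1980;  /* bits 9-15, offset restored */
    }

    int main(void) {
        int y, m, d;
        dos_date_unpack(23275, &y, &m, &d);
        printf("%04d-%02d-%02d\n", y, m, d);  /* prints 2025-07-11 */
        return 0;
    }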

Oh, I almost forgot to mention that this can be, and usually is, represented in hexadecimal format. The date 2025-07-11 would be represented as 0x5AEB.

If you are wondering what on earth 0x5AEB is, and how we got there, let’s do one final visualization: the conversion of the date 2025-07-11 into hexadecimal format.

As we saw earlier, the final binary representation of the date is:

Bits  : 0101101011101011

To convert this to hexadecimal, we can group the bits into 4-bit chunks:

Bits  : 0101 1010 1110 1011
Hex   :    5    A    E    B

So, if the number is 5AEB, what is the 0x prefix then? It is just a notation that indicates that the number is in hexadecimal format. It is not part of the number itself, but rather a way to tell the reader that the number is in base-16.
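Most programming languages use the same convention. In C, for example, a 0x literal and the %X format specifier move between the two notations:

    #include <stdio.h>

    int main(void) {
        int date = 0x5AEB;         /* the 0x prefix marks a base-16 literal */
        printf("%d\n", date);      /* prints 23275, the same number in decimal */
        printf("0x%04X\n", date);  /* prints 0x5AEB, formatted back as hex */
        return 0;
    }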

This 16-bit unit is called a “word”. A word is a fixed-size unit of data that the CPU can handle in one operation. If you are not familiar with the term, you have probably heard about 32-bit and 64-bit architectures.

Architecture   Word Size   What it means
8-bit          8 bits      1-byte word
16-bit         16 bits     2-byte word
32-bit         32 bits     4-byte word
64-bit         64 bits     8-byte word

In older systems like DOS, 16-bit words were standard. That’s the reason dates, times, and even file pointers are stored in 2-byte words.
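If you want to reproduce a DOS-style 2-byte word on a modern machine, C's fixed-width types guarantee the size regardless of the architecture's native word:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* uint16_t is exactly 2 bytes even on a 64-bit machine,
           which is all the packed DOS date and time need. */
        printf("%zu\n", sizeof(uint16_t));  /* prints 2 */
        return 0;
    }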

But wait, there is more! DOS also has a time representation, which is also packed into a 16-bit number, but it is not what you might expect.

DOS Time Representation

The idea behind storing time is similar to the date representation when it comes to packing the bits. However, counting time is a bit more complex than counting days, months, and years. DOS relies on BIOS interrupts to get the current time, and that is where we are going to end our journey.

Bits    Field                               Range
11–15   Hour (24-hour clock)                0–23
5–10    Minute                              0–59
0–4     Second ÷ 2 (stores even seconds)    0–29 => 0–58 seconds
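Putting the table into code, here is a sketch of how such a time word could be packed and read back (dos_time_pack is my own name; the field layout follows the table above). Note how an odd second is silently rounded down:

    #include <stdint.h>
    #include <stdio.h>

    /* Pack a time of day into the 16-bit DOS format:
       bits 0-4 = second / 2, bits 5-10 = minute, bits 11-15 = hour.
       The division by 2 is why DOS can only store even seconds. */
    uint16_t dos_time_pack(int hour, int minute, int second) {
        return (uint16_t)((hour << 11) | (minute << 5) | (second / 2));
    }

    int main(void) {
        /* 13:37:59 -- the odd second gets rounded down to 58. */
        uint16_t t = dos_time_pack(13, 37, 59);
        printf("%02d:%02d:%02d\n",
               t >> 11, (t >> 5) & 0x3F, (t & 0x1F) * 2);  /* prints 13:37:58 */
        return 0;
    }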