Seconds vs. milliseconds
The most common timestamp mistake is confusing 10-digit Unix seconds with 13-digit milliseconds. If a milliseconds value is read as seconds, the displayed date lands tens of thousands of years in the future; read the other way, a seconds value lands near January 1970. A good first check is simply to count digits and then verify that the converted date looks plausible.
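Here is a minimal sketch of that digit-count heuristic in Python; the function name and the 13-digit cutoff are illustrative, and you should adjust the threshold to the range your data can actually take:

```python
from datetime import datetime, timezone

def to_seconds(ts: int) -> float:
    """Illustrative heuristic: treat 13-digit values as milliseconds
    and 10-digit values as seconds."""
    if ts >= 1_000_000_000_000:  # 13+ digits: almost certainly milliseconds
        return ts / 1000
    return float(ts)             # 10-digit range: seconds

for raw in (1700000000, 1700000000000):
    dt = datetime.fromtimestamp(to_seconds(raw), tz=timezone.utc)
    # Sanity check: both inputs should print the same plausible 2023 date.
    print(raw, "->", dt.isoformat())
```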
Why timezone confusion happens
The underlying Unix timestamp is timezone-neutral, but the displayed date depends on the formatter. One dashboard may show UTC, another may show local time, and a log viewer may apply server timezone settings. Teams can end up arguing about different clock times even when they are discussing the same event.
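The snippet below demonstrates the effect with Python's zoneinfo module: one timezone-neutral number, three different displayed times. The zone names are arbitrary examples:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts = 1700000000  # one timezone-neutral instant

# The number never changes; only the formatter's zone changes the display.
print(datetime.fromtimestamp(ts, tz=timezone.utc))                  # 2023-11-14 22:13:20+00:00
print(datetime.fromtimestamp(ts, tz=ZoneInfo("America/New_York")))  # 2023-11-14 17:13:20-05:00
print(datetime.fromtimestamp(ts, tz=ZoneInfo("Asia/Tokyo")))        # 2023-11-15 07:13:20+09:00
```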
A reliable debugging habit
- Convert the timestamp into both local time and UTC.
- Write down the unit (seconds or milliseconds) used by the source system.
- Share the converted time with an explicit timezone label when discussing incidents.
- Use ISO 8601 output when you need an unambiguous, machine-friendly representation (a helper sketch follows this list).
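As referenced above, a small helper can perform the whole checklist in one call. The describe name and output layout here are hypothetical conveniences, not a standard API:

```python
from datetime import datetime, timezone

def describe(ts: float, unit: str = "s") -> str:
    """Hypothetical helper: render one timestamp as labeled UTC, local,
    and ISO strings so everyone in an incident thread sees the same facts."""
    seconds = ts / 1000 if unit == "ms" else ts  # record the source unit explicitly
    utc = datetime.fromtimestamp(seconds, tz=timezone.utc)
    local = utc.astimezone()                     # convert to this machine's local zone
    return (f"unit={unit}  "
            f"UTC={utc:%Y-%m-%d %H:%M:%S %Z}  "
            f"local={local:%Y-%m-%d %H:%M:%S %Z}  "
            f"iso={utc.isoformat()}")

print(describe(1700000000))
```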
Where timestamps appear most often
- API payloads and webhook events.
- Database rows and analytics exports.
- Application logs and queue processing records.
- Scheduling systems such as cron-like jobs and delayed tasks.
FAQ
Can a Unix timestamp include timezone information?
The raw number does not include a timezone. A timezone appears only when software formats the number into a readable date-and-time string.
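A quick round-trip makes this concrete: the offset exists only in the formatted string, and parsing the string back recovers the same zone-free number. Europe/Berlin is just an example zone:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ts = 1700000000

# The offset appears only once the number is formatted...
s = datetime.fromtimestamp(ts, tz=ZoneInfo("Europe/Berlin")).isoformat()
print(s)  # 2023-11-14T23:13:20+01:00

# ...and parsing the string back yields the original timezone-free number.
print(datetime.fromisoformat(s).timestamp())  # 1700000000.0
```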
Why does one log line look an hour off?
Daylight saving transitions, local timezone settings, or a mixed UTC/local display policy can all create apparent one-hour offsets.
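As a concrete illustration, the end of daylight saving in Europe/Berlin in 2023 makes two instants an hour apart display the same wall-clock time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")

# Two instants one hour apart, straddling the end of DST (2023-10-29).
for ts in (1698539400, 1698543000):
    print(ts, datetime.fromtimestamp(ts, tz=berlin).isoformat())
# 1698539400 2023-10-29T02:30:00+02:00
# 1698543000 2023-10-29T02:30:00+01:00
# Same wall-clock "02:30", different offsets: the classic one-hour surprise.
```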