'\" t
.\" Title: pg_test_timing
.\" Author: The PostgreSQL Global Development Group
.\" Generator: DocBook XSL Stylesheets v1.79.1 <http://docbook.sf.net/>
.\" Date: 2021
.\" Manual: PostgreSQL 9.6.24 Documentation
.\" Source: PostgreSQL 9.6.24
.\" Language: English
.\"
.TH "PG_TEST_TIMING" "1" "2021" "PostgreSQL 9.6.24" "PostgreSQL 9.6.24 Documentation"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
.ad l
.\" -----------------------------------------------------------------
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
pg_test_timing \- measure timing overhead
.SH "SYNOPSIS"
.HP \w'\fBpg_test_timing\fR\ 'u
\fBpg_test_timing\fR [\fIoption\fR...]
.SH "DESCRIPTION"
.PP
pg_test_timing
is a tool to measure the timing overhead on your system and confirm that the system time never moves backwards\&. Systems that are slow to collect timing data can give less accurate
\fBEXPLAIN ANALYZE\fR
results\&.
.SH "OPTIONS"
.PP
pg_test_timing
accepts the following command\-line options:
.PP
\fB\-d \fR\fB\fIduration\fR\fR
.br
\fB\-\-duration=\fR\fB\fIduration\fR\fR
.RS 4
Specifies the test duration, in seconds\&. Longer durations give slightly better accuracy, and are more likely to discover problems with the system clock moving backwards\&. The default test duration is 3 seconds\&.
.RE
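.PP
For example, a ten\-second run instead of the default three seconds:
.sp
.if n \{\
.RS 4
.\}
.nf
$ pg_test_timing \-\-duration=10
.fi
.if n \{\
.RE
.\}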
.PP
\fB\-V\fR
.br
\fB\-\-version\fR
.RS 4
Print the
pg_test_timing
version and exit\&.
.RE
.PP
\fB\-?\fR
.br
\fB\-\-help\fR
.RS 4
Show help about
pg_test_timing
command line arguments, and exit\&.
.RE
.SH "USAGE"
.SS "Interpreting results"
.PP
Good results will show most (>90%) individual timing calls take less than one microsecond\&. Average per loop overhead will be even lower, below 100 nanoseconds\&. This example from an Intel i7\-860 system using a TSC clock source shows excellent performance:
.sp
.if n \{\
.RS 4
.\}
.nf
Testing timing overhead for 3 seconds\&.
Per loop time including overhead: 35\&.96 nsec
Histogram of timing durations:
    < usec   % of total      count
         1     96\&.40465   80435604
         2      3\&.59518    2999652
         4      0\&.00015        126
         8      0\&.00002         13
        16      0\&.00000          2
.fi
.if n \{\
.RE
.\}
.PP
Note that the per\-loop time is reported in different units than the histogram\&. The per\-loop average can resolve down to a few nanoseconds (nsec), while the individual timing calls can only resolve down to one microsecond (usec)\&.
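.PP
As a consistency check, the per\-loop figure is just the elapsed time divided by the total number of timing calls, which the sample histogram above bears out:
.sp
.if n \{\
.RS 4
.\}
.nf
total calls: 80435604 + 2999652 + 126 + 13 + 2 = 83435397
per loop:    3 s / 83435397 calls = 35\&.96 nsec
.fi
.if n \{\
.RE
.\}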
.SS "Measuring executor timing overhead"
.PP
When the query executor is running a statement using
\fBEXPLAIN ANALYZE\fR, individual operations are timed in addition to the overall summary\&. The overhead on your system can be checked by counting rows with the
psql
program:
.sp
.if n \{\
.RS 4
.\}
.nf
CREATE TABLE t AS SELECT * FROM generate_series(1,100000);
\etiming
SELECT COUNT(*) FROM t;
EXPLAIN ANALYZE SELECT COUNT(*) FROM t;
.fi
.if n \{\
.RE
.\}
.PP
On the i7\-860 system measured, the count query runs in 9\&.8 ms while the
\fBEXPLAIN ANALYZE\fR
version takes 16\&.6 ms, each processing just over 100,000 rows\&. That 6\&.8 ms difference means the timing overhead per row is 68 ns, about twice what pg_test_timing estimated it would be\&. Even that relatively small amount of overhead makes the fully timed count statement take almost 70% longer\&. On more substantial queries, the timing overhead would be less problematic\&.
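.PP
The overhead figures quoted above follow directly from those two timings:
.sp
.if n \{\
.RS 4
.\}
.nf
(16\&.6 ms \- 9\&.8 ms) / 100000 rows = 68 ns per row
16\&.6 ms / 9\&.8 ms = 1\&.69, almost 70% longer
.fi
.if n \{\
.RE
.\}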
.SS "Changing time sources"
.PP
On some newer Linux systems, it\*(Aqs possible to change the clock source used to collect timing data at any time\&. A second example shows the slowdown possible from switching to the slower acpi_pm time source, on the same system used for the fast results above:
.sp
.if n \{\
.RS 4
.\}
.nf
# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
tsc hpet acpi_pm
# echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
# pg_test_timing
Per loop time including overhead: 722\&.92 nsec
Histogram of timing durations:
    < usec   % of total      count
         1     27\&.84870    1155682
         2     72\&.05956    2990371
         4      0\&.07810       3241
         8      0\&.01357        563
        16      0\&.00007          3
.fi
.if n \{\
.RE
.\}
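.PP
After such an experiment, the faster source can be restored the same way:
.sp
.if n \{\
.RS 4
.\}
.nf
# echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource
.fi
.if n \{\
.RE
.\}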
.PP
In this configuration, the sample
\fBEXPLAIN ANALYZE\fR
above takes 115\&.9 ms\&. That\*(Aqs 1061 nsec of timing overhead per row, again a small multiple of what\*(Aqs measured directly by this utility\&. With that much timing overhead, the actual query itself takes only a tiny fraction of the accounted\-for time; most of it is consumed by overhead instead\&. In this configuration, any
\fBEXPLAIN ANALYZE\fR
totals involving many timed operations would be inflated significantly by timing overhead\&.
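.PP
The per\-row arithmetic is the same as before:
.sp
.if n \{\
.RS 4
.\}
.nf
(115\&.9 ms \- 9\&.8 ms) / 100000 rows = 1061 ns per row
.fi
.if n \{\
.RE
.\}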
.PP
FreeBSD also allows changing the time source on the fly, and it logs information about the timer selected during boot:
.sp
.if n \{\
.RS 4
.\}
.nf
# dmesg | grep "Timecounter"
Timecounter "ACPI\-fast" frequency 3579545 Hz quality 900
Timecounter "i8254" frequency 1193182 Hz quality 0
Timecounters tick every 10\&.000 msec
Timecounter "TSC" frequency 2531787134 Hz quality 800
# sysctl kern\&.timecounter\&.hardware=TSC
kern\&.timecounter\&.hardware: ACPI\-fast \-> TSC
.fi
.if n \{\
.RE
.\}
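.PP
On a typical FreeBSD setup, the selection can be made persistent across reboots by placing the same sysctl in /etc/sysctl\&.conf (an assumption about your configuration; check your release\*(Aqs documentation):
.sp
.if n \{\
.RS 4
.\}
.nf
kern\&.timecounter\&.hardware=TSC
.fi
.if n \{\
.RE
.\}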
.PP
Other systems may only allow setting the time source on boot\&. On older Linux systems the "clock" kernel setting is the only way to make this sort of change\&. And even on some more recent ones, the only option you\*(Aqll see for a clock source is "jiffies"\&. Jiffies are the older Linux software clock implementation, which can have good resolution when it\*(Aqs backed by fast enough timing hardware, as in this example:
.sp
.if n \{\
.RS 4
.\}
.nf
$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
jiffies
$ dmesg | grep time\&.c
time\&.c: Using 3\&.579545 MHz WALL PM GTOD PIT/TSC timer\&.
time\&.c: Detected 2400\&.153 MHz processor\&.
$ pg_test_timing
Testing timing overhead for 3 seconds\&.
Per timing duration including loop overhead: 97\&.75 ns
Histogram of timing durations:
    < usec   % of total      count
         1     90\&.23734   27694571
         2      9\&.75277    2993204
         4      0\&.00981       3010
         8      0\&.00007         22
        16      0\&.00000          1
        32      0\&.00000          1
.fi
.if n \{\
.RE
.\}
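.PP
On Linux systems where the clock source can only be chosen at boot, the kernel\*(Aqs clocksource= command\-line parameter can usually be used to select one of the detected sources, for example:
.sp
.if n \{\
.RS 4
.\}
.nf
clocksource=hpet
.fi
.if n \{\
.RE
.\}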
.SS "Clock hardware and timing accuracy"
.PP
Collecting accurate timing information is normally done on computers using hardware clocks with various levels of accuracy\&. With some hardware the operating systems can pass the system clock time almost directly to programs\&. A system clock can also be derived from a chip that simply provides timing interrupts, periodic ticks at some known time interval\&. In either case, operating system kernels provide a clock source that hides these details\&. But the accuracy of that clock source and how quickly it can return results varies based on the underlying hardware\&.
.PP
Inaccurate time keeping can result in system instability\&. Test any change to the clock source very carefully\&. Operating system defaults are sometimes made to favor reliability over best accuracy\&. And if you are using a virtual machine, look into the recommended time sources compatible with it\&. Virtual hardware faces additional difficulties when emulating timers, and there are often per operating system settings suggested by vendors\&.
.PP
The Time Stamp Counter (TSC) clock source is the most accurate one available on current generation CPUs\&. It\*(Aqs the preferred way to track the system time when it\*(Aqs supported by the operating system and the TSC clock is reliable\&. There are several ways that TSC can fail to provide an accurate timing source, making it unreliable\&. Older systems can have a TSC clock that varies based on the CPU temperature, making it unusable for timing\&. Trying to use TSC on some older multicore CPUs can give a reported time that\*(Aqs inconsistent among multiple cores\&. This can result in the time going backwards, a problem this program checks for\&. And even the newest systems can fail to provide accurate TSC timing with very aggressive power saving configurations\&.
.PP
Newer operating systems may check for the known TSC problems and switch to a slower, more stable clock source when they are seen\&. If your system supports TSC time but doesn\*(Aqt default to that, it may be disabled for a good reason\&. And some operating systems may not detect all the possible problems correctly, or will allow using TSC even in situations where it\*(Aqs known to be inaccurate\&.
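.PP
On Linux, a quick way to see which source is actually in use, and why TSC may have been rejected, is to check the sysfs interface shown earlier along with the kernel log:
.sp
.if n \{\
.RS 4
.\}
.nf
$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
$ dmesg | grep \-i tsc
.fi
.if n \{\
.RE
.\}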
.PP
The High Precision Event Timer (HPET) is the preferred timer on systems where it\*(Aqs available and TSC is not accurate\&. The timer chip itself is programmable to allow up to 100 nanosecond resolution, but you may not see that much accuracy in your system clock\&.
.PP
Advanced Configuration and Power Interface (ACPI) provides a Power Management (PM) Timer, which Linux refers to as the acpi_pm\&. The clock derived from acpi_pm will at best provide 300 nanosecond resolution\&.
.PP
Timers used on older PC hardware include the 8254 Programmable Interval Timer (PIT), the real\-time clock (RTC), the Advanced Programmable Interrupt Controller (APIC) timer, and the Cyclone timer\&. These timers aim for millisecond resolution\&.
.SH "SEE ALSO"
\fBEXPLAIN\fR(7)