When storing timestamps in SQL Server, it is important to understand the precision limitations of the datetime data type.
Consider the following table structure:
<code class="language-sql">CREATE TABLE [TESTTABLE] (
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [DateField] [datetime] NULL,
    [StringField] [varchar](50),
    [IntField] [int] NULL,
    [BitField] [bit] NULL
)</code>
Execute the following code:
<code class="language-sql">BEGIN
    INSERT INTO TESTTABLE (IntField, BitField, StringField, DateField)
    VALUES ('1', 1, 'hello', {ts '2009-04-03 15:41:27.378'});
    SELECT SCOPE_IDENTITY()
END</code>
The expected behavior is to store the timestamp with full millisecond precision. However, when querying the table using the following statement:
<code class="language-sql">select * from testtable with (NOLOCK)</code>
You may notice that the DateField column contains .377 rather than the .378 you inserted:
<code>2009-04-03 15:41:27.377</code>
SQL Server's datetime type stores time with a resolution of roughly 1/300 of a second (about 3.33 ms), so stored millisecond values are rounded to increments that always end in 0, 3, or 7. For example:
<code>00:00:00.000
00:00:00.003
00:00:00.007
00:00:00.010
00:00:00.013
...</code>
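You can observe this rounding directly by casting a literal to datetime. The following one-off query (not part of the original example) is a minimal illustration:
<code class="language-sql">-- Casting a literal with .378 milliseconds to datetime rounds it
-- to the nearest 1/300-second increment, which is .377
SELECT CAST('2009-04-03 15:41:27.378' AS datetime) AS RoundedValue;
-- Returns: 2009-04-03 15:41:27.377</code>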
If full millisecond precision is critical, the datetime type cannot provide it. Common workarounds are to use the datetime2 type (available from SQL Server 2008 onward), which supports fractional-second precision down to 100 nanoseconds, or to store the sub-second portion in a separate integer column.
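For instance, assuming SQL Server 2008 or later, a datetime2(3) column preserves the inserted value exactly. The sketch below uses a hypothetical table name, TESTTABLE2, purely for illustration:
<code class="language-sql">-- Sketch assuming SQL Server 2008+: datetime2(3) keeps full millisecond precision
CREATE TABLE [TESTTABLE2] (
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [DateField] [datetime2](3) NULL
);

INSERT INTO TESTTABLE2 (DateField)
VALUES ('2009-04-03 15:41:27.378');

SELECT DateField FROM TESTTABLE2;
-- Returns: 2009-04-03 15:41:27.378 (no rounding to .377)</code>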