Bottomhole pressure and temperature gauges
Metrology is the science and process of ensuring that a measurement meets specified degrees of accuracy and precision. Bottomhole pressure-gauge and temperature-gauge performance depends on the static and dynamic metrological parameters described here. The pressure measurement equipment consists of the pressure transducer, associated electronics, and telemetry. Each component uniquely influences the measurement quality.
Static metrological parameters
The main static metrological parameters affecting gauge performance are:
Accuracy is the maximum pressure error exhibited by the pressure transducer and combines three error sources: fitting error, pressure hysteresis, and repeatability. The fitting error, also called the mean quadratic deviation (MQD), is a measure of the quality of the mathematical fit of the sensor response at a constant temperature. Pressure hysteresis is the maximum discrepancy of the transducer signal output between increasing and decreasing pressure excursions. Repeatability is the discrepancy between two consecutive measurements of a given pressure at the same temperature.
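As a minimal sketch, pressure hysteresis and repeatability could be quantified from calibration sweep data as follows; all numbers below are made-up illustrations, not real gauge records:

```python
import numpy as np

# Hypothetical converted readings (psi) at the same applied pressures,
# recorded first on an increasing and then on a decreasing excursion.
p_applied = np.array([0.0, 2000.0, 4000.0, 6000.0, 8000.0, 10000.0])
out_up    = np.array([1.2, 2001.0, 4002.5, 6003.0, 8002.0, 10001.5])
out_down  = np.array([0.8, 2003.5, 4005.0, 6005.5, 8003.5, 10001.5])

# Pressure hysteresis: maximum discrepancy between the increasing and
# decreasing excursions at the same applied pressure.
hysteresis = np.max(np.abs(out_up - out_down))

# Repeatability: discrepancy between two consecutive measurements of a
# given pressure at the same temperature.
reading_1, reading_2 = 6003.0, 6003.4
repeatability = abs(reading_1 - reading_2)

print(hysteresis)     # 2.5
print(repeatability)  # ~0.4
```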
Resolution is the minimum pressure change detected by the sensor. When referring to the resolution of a bottomhole pressure gauge, it is important to account for the associated electronics, because the gauge is always used in series with the electronics; the resolution of the measurement is therefore limited by the poorer of the two. Another important consideration is that resolution must be evaluated with respect to a specific sampling rate, because increasing the sampling rate worsens the resolution. The electronic noise of strain-gauge transducers is often the major factor affecting resolution. Mechanically induced noise may further limit gauge resolution because some gauges behave like microphones or accelerometers. This effect may be significant during tests when there is fluid or tool movement downhole.
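The dependence of resolution on sampling rate can be illustrated with synthetic noise data: averaging more raw samples per output reading (i.e., a lower output sampling rate) reduces the scatter of each reading. The noise level and sample counts below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw samples at constant applied pressure, with an assumed
# 0.05-psi electronic noise level on each raw sample.
raw = 5000.0 + 0.05 * rng.standard_normal(10_000)

def resolution_at(raw, samples_per_reading):
    """Estimate resolution as the scatter of readings obtained by
    averaging `samples_per_reading` raw samples per output point."""
    n = len(raw) // samples_per_reading
    readings = raw[: n * samples_per_reading].reshape(n, -1).mean(axis=1)
    return readings.std()

# Fewer raw samples per reading (a higher output rate) leaves more noise
# in each reading, i.e., a worse (coarser) resolution.
fast = resolution_at(raw, 10)    # high output sampling rate
slow = resolution_at(raw, 1000)  # low output sampling rate
print(fast > slow)  # True
```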
A pressure sensor is stable if it can retain its performance characteristics for a relatively long time period. Stability is quantified by the sensor mean drift (psi/D) at a given pressure and temperature. Three levels of stability can be defined: short-term stability for the first day of a test, medium-term stability for the following 6 days, and long-term stability for a minimum of one month.
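A sketch of quantifying stability, assuming synthetic daily readings at constant applied pressure and temperature: the mean drift (psi/D) is simply the slope of a least-squares line through the readings.

```python
import numpy as np

# Hypothetical readings (psi) at constant applied pressure and temperature,
# one reading per day over a 30-day long-term stability test.
days = np.arange(30.0)
reading = 5000.0 + 0.02 * days + 0.005 * np.sin(days)  # slow upward drift

# Mean drift (psi/D) is the slope of a least-squares line through the data.
drift_psi_per_day = np.polyfit(days, reading, 1)[0]
print(round(drift_psi_per_day, 3))  # ~0.02 psi/D
```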
Sensitivity is the ratio of the transducer output variation induced by a change in pressure to that change in pressure. The ratio represents the slope of the line produced by a plot of the transducer output vs. pressure input. The plotted sensitivity should be, but is not always, linear with respect to pressure.
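Sensitivity can be estimated as the fitted slope of transducer output vs. applied pressure; the frequency-output values below are illustrative stand-ins:

```python
import numpy as np

# Hypothetical transducer frequency output (Hz) vs. applied pressure (psi).
pressure = np.array([0.0, 2500.0, 5000.0, 7500.0, 10000.0])
output   = np.array([32000.0, 32250.0, 32500.0, 32750.0, 33000.0])

# Sensitivity is the slope of the output-vs.-pressure line: 0.1 Hz/psi here.
sensitivity = np.polyfit(pressure, output, 1)[0]
print(sensitivity)  # 0.1
```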
Dynamic metrological parameters
Four aspects are used to evaluate the dynamic metrology of pressure gauges.
Transient response during temperature variation
The sensor response is monitored under dynamic temperature conditions while holding the applied pressure constant. The transient response characterizes the time required to get a reliable pressure measurement for a given temperature variation. Peak error and stabilization time are calculated.
Transient response during pressure variation
The sensor response is recorded before and after a pressure variation at a constant temperature. Peak error and stabilization time are calculated.
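For either transient test, peak error and stabilization time could be extracted from the recorded response as follows. The exponential settling curve and the 0.5-psi tolerance band are assumptions for illustration, not standard values:

```python
import numpy as np

# Hypothetical sensor response (psi) sampled at 1 Hz after a disturbance,
# with a known reference pressure of 5000 psi applied throughout.
t = np.arange(0.0, 120.0)                    # seconds
response = 5000.0 - 8.0 * np.exp(-t / 20.0)  # settles toward the reference
reference = 5000.0

error = response - reference

# Peak error: largest deviation from the reference after the disturbance.
peak_error = np.max(np.abs(error))

# Stabilization time: first time after which the error stays inside a
# tolerance band (here 0.5 psi, an assumed specification).
tol = 0.5
inside = np.abs(error) <= tol
stabilization_time = t[np.argmax(inside)]  # first sample within the band

print(peak_error)          # 8.0
print(stabilization_time)  # 56.0 seconds
```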
Dynamic response during pressure and temperature shock
The sensor response is recorded before and after a combined pressure and temperature shock.
Dynamic temperature correction on the pressure measurement
The most advanced quartz-gauge transducers are based on a single-crystal sensor design. The crystal is activated on two distinct resonating modes, which are sensitive to both pressure and temperature but with different sensitivities. An advantage of this design is that the measured temperature is the temperature of the crystal. "Dynamic temperature correction" is used to adjust the pressure measurement of single-crystal sensors in real time for any remnant temperature effect. The nonuniform temperature of the crystal, especially while undergoing strong pressure or temperature variations or both, may induce such effects.
Calibration and standard evaluation tests for pressure gauges
Calibration is essential for obtaining good pressure and temperature data: it ensures that a pressure gauge reads as close as possible to the real pressure over its entire operating range. Calibration involves establishing transfer functions to convert raw output from the pressure and temperature data channels into scaled pressure and temperature readings. These transfer functions are 2D (in pressure and temperature) polynomial models whose degree depends on the accuracy required for the measurement.
The calibration process consists of applying known pressures and temperatures to the desired operational ranges. The raw pressure and temperature output signals are received and entered into a polynomial optimization routine. Input pressures are applied with a dead-weight tester, and input temperatures are generated by an oil bath. The following steps are required for a complete master calibration.
Choosing the pressure-temperature calibration points
More calibration points yield a more accurate calibration; however, the slow thermal equilibration at each temperature level limits how many points are practical. The best practice is to use no fewer than 100 pressure-temperature calibration points and to distribute them in a scheduled time routine, such as shown in Fig. 1.
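One possible way to lay out such a schedule is to hold each temperature plateau while stepping quickly through the pressures, so that the slow temperature equilibration happens only once per plateau. The ranges and step counts below are illustrative assumptions:

```python
# Hypothetical operating ranges: 0-10,000 psi and 25-150 degC.
pressures    = [i * 1000.0 for i in range(11)]       # 11 pressure levels
temperatures = [25.0 + j * 12.5 for j in range(11)]  # 11 temperature plateaus

# Step pressure quickly at each temperature plateau; temperature changes,
# which are slow to equilibrate, occur only between plateaus.
schedule = [(t, p) for t in temperatures for p in pressures]
print(len(schedule))  # 121 points, above the 100-point minimum
```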
Deriving the pressure calibration function
The pressure calibration function is a polynomial of order N in pressure and order M in temperature:

P = Σ_{i=0..N} Σ_{j=0..M} Aij (Sp − Spo)^i (St − Sto)^j,

in which the Aij calibration coefficients are determined by a linear regression providing a least-squares minimization; Sp and St are the pressure and temperature outputs, respectively; and Spo and Sto are the corresponding offsets. Usually the number of Aij coefficients can be limited to approximately 15. During this step, the peak error and MQD are determined.
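A minimal sketch of this regression, assuming synthetic calibration data and a quadratic (N = M = 2) model; the offsets, coefficient values, and point count are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts = 150
Sp = rng.uniform(0.0, 1.0, n_pts)  # raw pressure-channel output
St = rng.uniform(0.0, 1.0, n_pts)  # raw temperature-channel output
Spo, Sto = 0.01, 0.02              # assumed channel offsets

# "True" transfer function used only to synthesize reference pressures
# (in practice these come from the dead-weight tester).
def p_true(sp, st):
    return 10000.0 * (sp - Spo) - 300.0 * (st - Sto) \
        + 50.0 * (sp - Spo) * (st - Sto)

p_ref = p_true(Sp, St)

# Design matrix of terms (Sp-Spo)^i (St-Sto)^j for i <= N, j <= M.
N, M = 2, 2
terms = [((Sp - Spo) ** i) * ((St - Sto) ** j)
         for i in range(N + 1) for j in range(M + 1)]
X = np.column_stack(terms)

# Least-squares solution gives the Aij calibration coefficients.
A, *_ = np.linalg.lstsq(X, p_ref, rcond=None)

# Fitting error (MQD) and peak error of the calibration fit.
residual = X @ A - p_ref
mqd = np.sqrt(np.mean(residual ** 2))
peak = np.max(np.abs(residual))
print(mqd, peak)  # both near zero: the model reproduces the synthetic data
```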
Temperature calibration function
It is often useful, though not always necessary, to calibrate the transducer to output a scaled temperature measurement. The temperature calibration function is a polynomial of order N in temperature and order M in pressure:

T = Σ_{i=0..N} Σ_{j=0..M} A′ij (St − Sto)^i (Sp − Spo)^j,

in which the A′ij calibration coefficients are determined by a linear regression providing a least-squares minimization; Sp, St, Spo, and Sto are as described previously.
Determining nonlinearity in pressures and temperatures
Several other tests supplement the master calibration:
- Pressure thermal sensitivity. The pressure thermal sensitivity represents the error (psi) that results if the temperature measurement is in error by 1°C.
- Maximum hysteresis during the calibration cycle. This test is determined from calibration data.
- Calibration check. A calibration check verifies the consistency of the sensor readings when the applied pressures and temperatures are different from those used during the calibration cycle. The calibration check is performed in the laboratory at the time the sensor is evaluated and is essentially a rerun of a master calibration.
- Other procedures and tests. Standard procedures are typically used in evaluating pressure transducers to compare different technologies and certify the calibration parameters. The most commonly used standard procedures are as follows:
- Complete master calibration
- Calibration check
- Stability tests: medium term and long term
- Repeatability test
- Resolution test
- Noise or short-term stability test
- Dynamic tests: temperature shock, temperature transient, temperature response time, and pressure shock
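The pressure thermal sensitivity, for example, can be estimated numerically from a calibrated transfer function by perturbing the temperature input by 1°C at an operating point. The transfer function and coefficients below are purely illustrative assumptions:

```python
# A hypothetical calibrated transfer function P(sp, t), where sp is the
# raw pressure-channel output and t the scaled temperature (degC);
# the coefficients are illustrative only, not from a real calibration.
def p_cal(sp, t_degc):
    return 10000.0 * sp - 0.8 * t_degc + 0.001 * sp * t_degc

# Pressure thermal sensitivity: pressure error (psi) produced by a 1 degC
# error in the temperature measurement, evaluated at an operating point.
sp, t = 0.5, 100.0
thermal_sensitivity = p_cal(sp, t + 1.0) - p_cal(sp, t)
print(thermal_sensitivity)  # -0.7995 psi per degC at this operating point
```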
Metrology of temperature gauges
When the temperature is used to correct pressure gauge readings, it must come from the pressure-sensing element, not from the wellbore fluid. On the other hand, bottomhole-fluid temperature measurements are performed with sensors that are in immediate contact with the wellbore fluid and have a minimum thermal inertia (1 or 2 seconds) to follow the variations of the fluid temperature as closely as possible. For this reason, temperature measurements available from pressure-gauge technology are rarely valid for traditional wellbore temperature profiling, which uses the wellbore fluid temperature as a diagnostic tool to detect anomalies in the expected flow patterns in and around the wellbore.
Typical wellbore-fluid temperature measurements have a resolution in the range of 0.05°F and accuracy in the range of 1°F. Accuracy in thermometry is not always a prerequisite because temperature measurements are often normalized between themselves—e.g., from pass to pass or from flowing run to shut-in run in production logging applications. Accuracy is necessary, however, to compare absolute bottomhole temperatures to draw geothermal maps; to design temperature-sensitive operations, such as stimulation or drilling operations using temperature-sensitive chemicals; or to operate close to the limitations of equipment, such as in geothermal or other high-temperature oil and gas fields.
Resolution is of paramount importance for applications such as the diagnosis of flowing wells or when the measured temperature is the temperature of a pressure-sensing element—the reading of which is affected by minute changes in sensor temperature. High-resolution wellbore-fluid thermometry is also used in extremely slanted and horizontal wells, in which true vertical depth (TVD) variations, and therefore geothermal temperature variations, are small.