Meaning of "precision" in unit specification

Hi, when I change the precision in the units settings, it does not behave the way I expect in the image.

This works as it should. Precision = 4 means that 4 significant digits are used for displaying a length, and this is what you see (5.580 and 42.33).
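
For illustration, here is a minimal plain-Python sketch of that display rule (this is not the application's actual code, just the same formatting behavior):

    # Sketch only: show a length with 4 significant digits, as precision = 4 does.
    # The '#' flag keeps trailing zeros, so 5.58 is displayed as 5.580.
    precision = 4
    for length_mm in (5.58, 42.33):
        print(format(length_mm, f"#.{precision}g"), "mm")
    # prints:
    # 5.580 mm
    # 42.33 mm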

How can I make it accurate to one decimal place?

There is no fixed-decimal display mode. Can you tell us a bit about what you are trying to achieve, so that we can understand why the current display mode is not appropriate?

I want the image to show measurements with a fixed number of decimals, such as 5.6 mm and 42.3 mm in this picture.

We need to understand what you are trying to achieve and why the current behavior is problematic for you.

In this picture, the number of decimal places is not consistent.

When we say decimal precision, we mean that every measurement is shown to the same number of decimal places, such as 5.58 mm and 42.33 mm in this image.

I want the image to show a uniform number of decimal places for the measurements, and the current setting obviously behaves in a different way.

It is clear what you want, but we need to understand why, because there are many ways this could be achieved and we need to choose a solution that addresses the underlying need rather than just matching a particular appearance. For example, these completely different solutions could all result in line length measurements appearing with the same number of decimals:

  • A. Add a fixed-decimal display mode.
  • B. Allow overriding the display format or precision value for each measurement.
  • C. Set the line length measurement display precision based on the current zoom factor, or based on what object was visible where you made the measurement (e.g., measuring where a high-resolution volume or a continuous surface is visible could result in more decimals being displayed).

There are good reasons for using the current precision definition. Primarily, it corresponds to a specific relative error: for example, precision = 4 means that the value is displayed with roughly +/- 0.1% error. Note that these display settings are used everywhere throughout the application, not just when you measure length in an image, but also when you measure length on a continuous surface (where there is no limit on precision), when displaying measurement results, and when taking a length value as input. When you interactively measure length in a view, you typically use a higher zoom factor and want to see smaller length differences.
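
To make the difference concrete, here is a plain-Python sketch (not the application's settings code) contrasting the two conventions: significant digits keep the relative error roughly constant across magnitudes, while a fixed number of decimals keeps the absolute error constant instead.

    def fmt_significant(value_mm, digits=4):
        # Significant-digit display: relative rounding error stays small
        # (below about 0.1% for 4 digits) regardless of the magnitude.
        return format(value_mm, f"#.{digits}g") + " mm"

    def fmt_fixed(value_mm, decimals=1):
        # Fixed-decimal display: absolute rounding error is always +/- 0.05 mm,
        # so short distances lose relative accuracy.
        return f"{value_mm:.{decimals}f} mm"

    for length_mm in (5.58, 42.33, 0.0558):
        print(fmt_significant(length_mm), "vs", fmt_fixed(length_mm))
    # 5.580 mm vs 5.6 mm
    # 42.33 mm vs 42.3 mm
    # 0.05580 mm vs 0.1 mm   (fixed decimals nearly erase this value)

With significant digits the third value is still meaningful; with one fixed decimal it collapses to 0.1 mm, which illustrates why the relative-precision convention is used by default.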

We need to understand why the current, generally reasonable behavior is not suitable for your use case. For example, if the distance that you measure is comparable to the voxel size, then you may not want to claim that your shorter distance measurements are more accurate. Is this the reason for wanting a fixed number of decimals? Why is it a problem that more digits are displayed for shorter distances? Would you want to use this fixed-decimal display for every length value input and output in the application, or just for line length measurements? Would you always use the same number of decimals for all length measurements in your scene, or would you use a different value depending on the resolution of the underlying image that you used for that specific measurement?

It is just personal habit; there is nothing special underlying the thought of unifying the decimal places. Thank you very much for your detailed reply.