Last answered: 29 Jan 2024

Posted on: 29 Jan 2024

Resolved: Float and double, single precision and double precision

Even after reading the resolved comment, I still don't understand float and double. What do single precision and double precision mean, and what is the difference between them? Does it mean they have a precision of one and two? If so, what does the maximum number of digits mean? And doesn't one symbol correspond to one byte here? Can you please explain?

1 answer (1 marked as helpful)
Instructor
Posted on: 29 Jan 2024

Hi Shalini!
Thanks for reaching out.


Single precision and double precision refer to how much storage a floating-point number uses and, as a result, how many significant digits it can hold. Single precision, usually called "float," uses 32 bits (4 bytes) and provides about 7 significant decimal digits. Double precision, usually called "double," uses 64 bits (8 bytes) and provides about 15-16 significant decimal digits. So "single" and "double" do not mean a precision of one and two; they mean one versus two 32-bit words of storage. The maximum number of digits is exactly this limit: the number of significant digits the format can represent accurately.

As for bytes: the "one symbol ~ one byte" rule applies to text encodings such as ASCII, where each character takes one byte. Numeric types work differently: the whole value is encoded in binary across a fixed number of bytes (4 for a float, 8 for a double), not one byte per digit.
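
If it helps to see this in practice, here is a minimal C++ sketch (the course itself may use a different language) that prints the size of each type in bytes and shows where 1/3 stops being accurate in each format:

```cpp
#include <iostream>
#include <iomanip>
#include <limits>

int main() {
    // 1/3 cannot be represented exactly in binary, so both types round it.
    float  f = 1.0f / 3.0f;  // single precision: 32 bits = 4 bytes
    double d = 1.0 / 3.0;    // double precision: 64 bits = 8 bytes

    std::cout << "sizeof(float)  = " << sizeof(float)  << " bytes\n";
    std::cout << "sizeof(double) = " << sizeof(double) << " bytes\n";

    std::cout << std::setprecision(20)
              << "float  1/3 = " << f << '\n'   // correct to ~7 digits
              << "double 1/3 = " << d << '\n';  // correct to ~15-16 digits

    // The standard library reports the guaranteed decimal digits directly.
    std::cout << "float  digits10 = " << std::numeric_limits<float>::digits10  << '\n'  // 6
              << "double digits10 = " << std::numeric_limits<double>::digits10 << '\n'; // 15
}
```

On a typical machine this prints sizes of 4 and 8 bytes, and you can watch the float version of 1/3 drift from the true value after roughly the seventh digit, while the double version stays accurate for about twice as many digits.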


Hope this helps.
Best,
Tsvetelin
