Last answered:

10 Apr 2023

Posted on:

07 Apr 2023


What is meant by single and double precision for the floating-point type?

I don't understand the difference between single and double precision. Please explain what is meant by these terms and how they differ from each other, with an example.

1 answer (0 marked as helpful)
Posted on:

10 Apr 2023


Hi Shandana!
Thanks for reaching out.

The two terms refer to how many bits are used to store a floating-point number in memory, as defined by the IEEE 754 standard. A single-precision value occupies 32 bits and gives roughly 7 significant decimal digits, while a double-precision value occupies 64 bits and gives roughly 15-16 significant decimal digits. So "double" precision means the number is stored with more bits, which lets it represent values more accurately and over a wider range. Note that this is a property of the data type, not of whether your machine is 32-bit or 64-bit: a 32-bit machine can still work with double-precision numbers.
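Here is a minimal sketch in Python that makes the difference visible. Python's built-in `float` is already double precision (64-bit), so we use the standard-library `struct` module to round-trip a value through a 32-bit single-precision representation and compare the results:

```python
import struct

x = 0.1  # stored as a 64-bit (double precision) float in Python

# Pack x into 4 bytes as a single-precision float, then unpack it back.
# This simulates storing the value in a 32-bit float.
single = struct.unpack('f', struct.pack('f', x))[0]

print(f"double precision: {x:.20f}")
print(f"single precision: {single:.20f}")
```

You should see that the single-precision version diverges from 0.1 after about 7 decimal digits, while the double-precision version stays accurate to about 16 digits. Neither stores 0.1 exactly, because 0.1 has no finite binary representation; double precision just gets much closer.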

Hope this helps.
