Posted on: 07 Apr 2023

What is meant by single and double precision for floating-point types?

I don't understand the difference between single and double precision. Please explain what these terms mean and how they differ from each other, with an example.

1 answer (0 marked as helpful)
Posted on: 10 Apr 2023

Hi Shandana!
Thanks for reaching out.


The two terms refer to how many bits are used to store a floating-point number in memory, as defined by the IEEE 754 standard. Single precision uses 32 bits and gives you roughly 7 significant decimal digits, while double precision uses 64 bits and gives you roughly 15-16 significant decimal digits. In most languages this corresponds to the float and double types, and it does not depend on whether your machine has a 32-bit or 64-bit architecture.
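As a quick illustration, here is a minimal sketch in Python, assuming NumPy is installed (plain Python floats are always double precision, so np.float32 is used here to show single precision):

```python
import numpy as np

# Single precision: 32 bits, about 7 significant decimal digits
single = np.float32(1) / np.float32(3)

# Double precision: 64 bits, about 15-16 significant decimal digits
double = np.float64(1) / np.float64(3)

print(f"float32: {single:.20f}")  # digits stop matching 1/3 after about 7 places
print(f"float64: {double:.20f}")  # digits stop matching 1/3 after about 16 places
```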


Hope this helps.
Best,
Tsvetelin
