What is meant by single and double precision for a floating-point type?
I don't understand the difference between single and double precision. Please explain what is meant by these terms and how they differ from each other, with an example.
Hi Shandana!
Thanks for reaching out.
The two terms refer to how much memory is used to store a floating-point number: single precision stores the value in 32 bits and double precision in 64 bits, following the IEEE 754 standard. The extra bits in the double format go mostly into the significand, so a double can hold roughly twice as many significant decimal digits as a float, and it also covers a wider range of exponents.
Hope this helps.
Best,
Tsvetelin
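Since the thread doesn't mention a language, here is a minimal C sketch of the difference; the constant and the format strings are just illustrative. The same decimal literal loses digits when squeezed into 32 bits:

```c
#include <stdio.h>

int main(void) {
    /* The same decimal constant stored in both formats. */
    float  f = 0.123456789012345678f; /* single precision: 32 bits */
    double d = 0.123456789012345678;  /* double precision: 64 bits */

    /* Only about 7 significant decimal digits survive in the float,
       about 15-16 in the double. */
    printf("float : %.18f\n", f);
    printf("double: %.18f\n", d);
    return 0;
}
```

On a typical machine this prints something like 0.123456791043281556 for the float and 0.123456789012345677 for the double, so you can see exactly where each format runs out of precision.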
Single and double refer to how the data is stored in memory. In the single format the significand holds 23 stored bits (24 counting the implicit leading bit), which gives about 7 decimal digits, whereas in the double format it holds 52 stored bits (53 in total), about 15-16 decimal digits.
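If you'd rather check these limits on your own system than memorize them, the C standard header <float.h> exposes them as constants; a short sketch (the values noted in the comments are the usual IEEE 754 ones):

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* Significand size in bits (including the implicit bit)
       and guaranteed decimal digits for each format. */
    printf("float : %d significand bits, %d decimal digits\n",
           FLT_MANT_DIG, FLT_DIG);   /* typically 24 and 6  */
    printf("double: %d significand bits, %d decimal digits\n",
           DBL_MANT_DIG, DBL_DIG);   /* typically 53 and 15 */
    return 0;
}
```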