What is meant by single and double precision for floating-point types?
I don't understand the difference between single and double precision. Please explain what these terms mean and how the two differ, ideally with an example.
Thanks for reaching out.
The two terms refer to the storage format of the floating-point value itself, not to the architecture of your machine. Both formats are defined by the IEEE 754 standard: a single-precision value occupies 32 bits (1 sign bit, 8 exponent bits, 23 fraction bits) and gives roughly 7 significant decimal digits, while a double-precision value occupies 64 bits (1 sign bit, 11 exponent bits, 52 fraction bits) and gives roughly 15 to 16 significant decimal digits. In C-family languages, `float` is typically single precision and `double` is double precision, and you can use either format on a 32-bit or a 64-bit machine; the CPU's word size does not decide which one a value uses.
Hope this helps.