Why does ensuring a minimum bit-size matter?

I was reading through the notes and learning about the different data types and their size specifiers. I understand that these specifiers, as the notes say, just ensure that the type you're specifying (short/long) contains a minimum number of bits. I just don't understand why having a certain number of bits matters. Doesn't it have the same storage capability and type either way? What does doing this provide?
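(For reference, here's a quick sketch of what those guarantees mean in practice. The C standard only promises minimum widths, so the printed sizes can vary from machine to machine:)

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* CHAR_BIT is the number of bits in a byte (almost always 8) */
    printf("short: %zu bits (C guarantees at least 16)\n", sizeof(short) * CHAR_BIT);
    printf("int:   %zu bits (C guarantees at least 16)\n", sizeof(int)   * CHAR_BIT);
    printf("long:  %zu bits (C guarantees at least 32)\n", sizeof(long)  * CHAR_BIT);
    return 0;
}
```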
3 Replies
split · 3y ago
Yes, it's mostly about efficiency and storage. Your computer might have the memory capacity to store billions of bytes, but as a developer you might be developing for other devices like routers, IoT devices, and other low-level hardware, and you have to use their capabilities efficiently. There are also CPU-level optimizations that happen based on the data types. Once you get into embedded programming, all of this will make sense.
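If you're curious what that looks like in practice, embedded code usually pins down exact widths with `<stdint.h>` instead of relying on the minimums, so behavior doesn't change across hardware (the variable names here are just made-up examples):

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint8_t  sensor_reading = 200;   /* exactly 8 bits: values 0..255 */
    uint16_t adc_sample     = 4095;  /* exactly 16 bits: fits a 12-bit ADC reading */
    uint32_t uptime_ms      = 0;     /* exactly 32 bits, whatever the platform */

    uptime_ms += 1000;
    /* PRIu8/PRIu16/PRIu32 are the matching printf format macros */
    printf("reading=%" PRIu8 " sample=%" PRIu16 " uptime=%" PRIu32 " ms\n",
           sensor_reading, adc_sample, uptime_ms);
    return 0;
}
```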
shmerg (OP) · 3y ago
Thank you so much!!! @split, is it good to make a habit of specifying sizes in every project I make, in case I run into those issues in the future? Or are those situations rare in the field?
split · 3y ago
You shouldn't be worrying about that kind of memory optimization at this point. Just use the data types that match what you need from them: single-byte types to store up to 256 distinct values, long for serial numbers, float for decimals, double for high precision and longer decimals, etc., and that should be more than enough.
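Roughly what that advice looks like in code (the names and values are just illustrative):

```c
#include <stdio.h>

int main(void) {
    unsigned char age    = 42;             /* one byte holds 0..255, plenty for an age */
    long serial_number   = 1234567890L;    /* long: at least 32 bits */
    float price          = 19.99f;         /* float: ~7 significant decimal digits */
    double precise_total = 19.99 * 1.0825; /* double: ~15-16 significant digits */

    printf("age=%u serial=%ld price=%.2f total=%.6f\n",
           (unsigned)age, serial_number, price, precise_total);
    return 0;
}
```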