[LCP] struct & union

Greg Black gjb at gbch.net
Mon Feb 4 18:29:51 UTC 2002


Paul Gearon wrote:

| On Mon, 4 Feb 2002, Greg Black wrote:
| 
| > Paul Gearon wrote:
| >
| > | The size of a float is part of the standard.
| >
| > Only in part.
| 
| True, but I pointed out that 4 bytes is considered "adequate" and is not
| mandatory.

The minimum sizes of object types are set by the standard.  That
part /is/ mandatory.  It is also mandatory for implementations
to define the details of sizes in <limits.h> and <float.h>.
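
For anyone who wants to see what their implementation actually
defines, here's a minimal sketch (plain C, only standard headers)
that prints a few of the implementation-defined values from
<limits.h> and <float.h>:

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void)
    {
        /* Every conforming implementation must define these macros;
           the values themselves are implementation-defined. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        printf("INT_MAX  = %d\n", INT_MAX);
        printf("LONG_MAX = %ld\n", LONG_MAX);
        printf("FLT_DIG  = %d\n", FLT_DIG);
        printf("DBL_DIG  = %d\n", DBL_DIG);
        return 0;
    }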

| > | Someone here may have a more accurate picture of what size an int is in a
| > | given circumstance.  These days I just accept that it's 32 bits since
| > | everyone seems to have standardized on that size.
| >
| > Not at all.  People running 16-bit or smaller CPUs will almost
| > certainly be using 16-bit "int".  People with big registers may
| > well use 64-bit or 128-bit sizes for "int".  Of course, as so
| > many people discovered when 64-bit CPUs started to be used more
| > widely, lots of software is written by people who just don't
| > understand C's size rules and who write software that breaks in
| > the face of "unexpected" sizes of "int", "long", etc.
| 
| OK, this is where I should have been clearer.  On 16-bit processors you
| will certainly find that sizeof(int)==2, but I once used a compiler for an
| 8-bit processor where sizeof(int)==2 as well.

Of course -- the minimum size for int is 16 bits and no
conforming implementation can use less than this.
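
If code genuinely needs more than the guaranteed minimum, one
common trick is to make the build fail when the assumption doesn't
hold.  A sketch (the typedef names are arbitrary) using the old
negative-array-size idiom:

    #include <limits.h>

    /* The array size is -1, and the compile fails, when the
       condition is false.  The first check can never fire on a
       conforming implementation; the second will fire wherever
       int is only 16 bits. */
    typedef char assert_int_16[(INT_MAX >= 32767) ? 1 : -1];
    typedef char assert_int_32[(INT_MAX >= 2147483647) ? 1 : -1];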

| For PC/Workstation applications
| things are also a little strange because there seems to be a blind
| adherence to 32 bits.

If 32 bits is an efficient size, it's a good choice.

| For instance, M$ changed the size of an int to 32 bits without a CPU
| change.

I don't touch MS stuff, but I suspect the change accompanied a
change of memory model and so would have made sense.

| Also, I kept expecting to find 64-bit ints on 64-bit
| architectures, but I keep finding 32-bit ints instead, with names like
| "long long" being used for 64 bits.

Because too many programmers thought the world would always be a
VAX and couldn't be bothered writing correct software.  Compiler
vendors chose not to alienate their incompetent customers.

| It seems to me that many (not all)
| compiler writers have built a de facto standard on the 32-bit int without
| asking whether anyone really wanted it.

I think they have found that "everyone" did want it, because so
many people wrote code that depended on it.

| I guess it's a mistake to assume
| that it will *always* be 32 bits, but on a desktop machine I don't think
| you'll find anything else.

Perhaps, but it's foolish to depend on it.  After all, it's
trivial to use sensible typedefs for these things and to have the
configuration tools test the sizes before the software is built
on each platform.
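
A sketch of what such a typedef might look like, picking a signed
type of at least 32 bits with nothing but <limits.h> (C99's
<stdint.h> does this job properly; the name my_int32 here is just
illustrative):

    #include <limits.h>

    #if INT_MAX >= 2147483647
    typedef int my_int32;           /* int is wide enough */
    #elif LONG_MAX >= 2147483647
    typedef long my_int32;          /* fall back to long  */
    #else
    #error "no integer type of at least 32 bits found"
    #endif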

| So yeah, if I need to use bitmasks I'll use sizeof and the shift
| operators, but if someone asks how large an int is I'll say that it's
| normally 4 bytes.  :-)

Well, the right answer is that it's defined in <limits.h> and
can be checked in code with "sizeof(int)".
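
And on the bitmask point: deriving the width from sizeof and
CHAR_BIT instead of assuming 32 bits might look like this sketch
(which assumes unsigned int has no padding bits):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        size_t bits = sizeof(unsigned int) * CHAR_BIT;
        unsigned int high_bit = 1u << (bits - 1);  /* top bit only  */
        unsigned int all_ones = ~0u;               /* every bit set */

        printf("unsigned int is %lu bits wide\n", (unsigned long)bits);
        printf("high bit mask: %#x\n", high_bit);
        printf("all ones mask: %#x\n", all_ones);
        return 0;
    }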

Greg


