My understanding is that most of the problems 64-bit compiling is going to run into are going to be type width differences. For example, an "int" typically stays 32 bits wide in a 64-bit environment, but a "long" grows to 64 bits on LP64 platforms (64-bit Linux, macOS) while staying 32 bits on LLP64 (64-bit Windows). Pointers are going to be 64 bits wide instead of 32 either way.
This means that any struct that contains a long or a pointer is going to be incorrectly sized and misaligned in an environment it wasn't designed for.
The cleanest and easiest way to avoid screwing up your structs and such between 32-bit and 64-bit environments would be to change any references to variable-width types ("short", "int", "long", etc.) to fixed-width types.
GCC (g++) uses "int16_t", "int32_t", "int64_t", etc. for its fixed-width integer types, defined in <stdint.h> (or <cstdint> in C++).
Visual C++ (the environment I'm accustomed to) uses __int16, __int32, __int64 for its fixed-width types.
What I generally do in my own code is define a set of fixed-width typedefs for maximum portability:
typedef signed __int32 sint32;
typedef unsigned __int32 uint32;
typedef signed __int16 sint16;
typedef unsigned __int16 uint16;
typedef signed __int8 sint8;
typedef unsigned __int8 uint8;
etc.
Of course, coming from an assembly background, I also tend to use BYTE, WORD, DWORD, and QWORD for the unsigned integer types.
The gist of it is that these variables will retain their original size no matter what environment they're in, 32-bit, 64-bit, 128-bit, whatever.
It may be a simple matter of using #ifdef directives to set the appropriate typedefs for each C++ compiler (g++, Visual C++, etc.)...
Code:
#if defined(__GNUC__)          /* defined by GCC/g++ */
#include <stdint.h>
typedef int32_t  sint32;
typedef uint32_t uint32;
#elif defined(_MSC_VER)        /* defined by Visual C++ (__CLR_VER only exists under /clr) */
typedef signed   __int32 sint32;
typedef unsigned __int32 uint32;
#endif
Then doing a search-and-replace to change all references to the variable-width types into the fixed-width types.
Just my 2 coppers.
- Shendare