The benefits of writing software in C (the gains in programmer productivity and software reliability) have been applied to a wide range of embedded systems, from simple 8-bit CPUs to powerful 32-bit units. Now, with many diskettes full of good C source code, why not reuse some of it? But how can you run 32-bit C source code on an 8-bit CPU, or 8-bit C source code on a 32-bit CPU? The answer lies in designing your code with portability in mind from the beginning.
One common portability problem has to do with the size of C variables. On most 8-bit and 16-bit CPUs, compilers treat int variables as 16-bit values, but on most 32-bit CPUs, compilers treat int variables as 32-bit values. If any of your software algorithms depend on the size of an int being 16 bits, your software isn't going to run properly on a 32-bit CPU.
For example, you might take advantage of the fact that a 16-bit counter rolls over to 0 after 65,536 increments, but on a 32-bit CPU, this isn't true. One way around this is to separate variables into three categories: those that require more than 16 bits, those that require exactly 16 bits, and those that require 16 bits or less.
For variables that require 16 bits or less, just declare them as int variables. This lets the compiler choose the most efficient size (16 bits on some CPUs, 32 bits on others). Use these variables for small counters, data values, etc., that won't run into the int limit on the smallest CPUs (almost always 16 bits).
For variables that require exactly 16 bits, you need to create a new data type using C's typedef feature. The typedef feature lets you invent a new name for a data type, in addition to the usual int, long, etc. On a 16-bit CPU, you would say typedef int signed16;, while on a 32-bit CPU you would say typedef short signed16;. In both cases, you have defined a new name for a "signed 16-bit data value." So you can now write your source code and declare variables to be signed16 (instead of int), and you'll get a "signed 16-bit data value" on any CPU (8, 16, or 32 bits).
Likewise, for variables that require more than 16 bits, you can say typedef long signed32; on a 16-bit CPU, and typedef int signed32; on a 32-bit CPU. In both cases, you have defined a new name for a "signed 32-bit data value," so you can declare variables to be signed32 instead of long or int.
You can create other typedefs as well, such as unsigned16 for "unsigned 16-bit data value" and unsigned32 for "unsigned 32-bit data value." Put all these typedefs in an "include" file and #include it in your source files. When you invoke your compiler, you can pass in a special symbol to select the proper variable size (most compilers let you define a symbol on the command line).
Here are some examples:
/* TYPES.H */

#ifdef SMALL_CPU

typedef int           signed16;    /* signed 16-bit value   */
typedef long          signed32;    /* signed 32-bit value   */
typedef unsigned int  unsigned16;  /* unsigned 16-bit value */
typedef unsigned long unsigned32;  /* unsigned 32-bit value */

#else

typedef short          signed16;    /* signed 16-bit value   */
typedef int            signed32;    /* signed 32-bit value   */
typedef unsigned short unsigned16;  /* unsigned 16-bit value */
typedef unsigned int   unsigned32;  /* unsigned 32-bit value */

#endif
In your source file, you would write your code like this...
/* SOURCE.C */

#include "types.h"

int      my_var;     /* generic int variable, size can vary */
signed16 my_var_16;  /* this variable is always 16 bits     */
signed32 my_var_32;  /* this variable is always 32 bits     */
...and you would invoke your 16-bit compiler like this (for example):
cc -DSMALL_CPU source.c