Writing Portable Graphics Code

Introduction

Computer programs are difficult to design, write, debug and test. Having gone through these stages in creating a program, it would be helpful if we didn't have to repeat that effort when we wanted to move the program to another operating system, or when the computer hardware is upgraded, or when some other aspect of its environment changes.

This is known as portability - the ability to move a program to another environment and to have it work with no (or few) modifications. Ideally there would be no changes needed any time a computer program is moved to a new system, but in practice there are always some changes needed.

Portability is good for a number of reasons. Portable computer code means that the effort of rewriting the code for each new computer system is reduced. With care, code can be re-used without any modifications. The effort of testing a program is also reduced, because code which is known to be portable can be tested thoroughly on one computer system, and then moved to other systems without requiring the same amount of testing.

Portable code can also be good code in its own right. Thinking about portability means designing abstractly from the start. This forces the programmer to consider the assumptions upon which a design is based, and to encapsulate details which are specific to one platform, hiding them behind well defined interfaces. This can make a design more modular and easy to understand. Even if a program were only to be used on one computer and one operating system, the effort of writing portable code has larger rewards than portability alone.

When many users are involved, portability has rich benefits. Consider a teaching environment, such as a computer studies course, where teachers and students must share ideas and demonstrate practical skills through sharing computer code. The ability for students to take work home, tinker with code on a home machine, and then bring it back to share with the class is clearly desirable. Without portability, such work would be difficult because code which works on a home computer may not work anywhere else.

Consider an office environment where people are working on shared documents using computer programs and networks. The computers may have been purchased from different manufacturers at different times, and use different hardware. Some may use different operating systems, or different versions of the same operating system. Sharing data in such a diverse environment is clearly desirable, so programs which are designed correctly should be able to import information from any source, to allow sharing. Portable code is written with such situations in mind.

Consider the World Wide Web, where information can be browsed from any country, and should still appear correctly. This is an extreme case, where not only portability of programs, but internationalisation also plays an important role.

Graphical programs are a particular source of difficulty when it comes to portability, since graphics is generally not a standard part of most computer languages (Java is a notable exception). Indeed, software manufacturers have made their money by creating proprietary graphics and windowing systems, jealously guarding their source code, and advocating one operating system, or one hardware platform, over all others in an attempt to gain or preserve 'market share'. Standardising graphics interfaces would have many benefits for users and programmers alike, but would not help these companies, so there is little agreement between manufacturers, and writing portable graphical programs is therefore particularly difficult.

Herein is collected a set of rules of thumb about how to achieve portability in graphical programs, and where the danger areas lie.

Computer Languages

The choice of computer language used for a program can have a large impact on its portability. Some languages are more portable than others; some are even designed with portability in mind from the start, such as Java.

Whichever language is chosen, it is useful to be aware of the danger areas within that language. The following discussion relates particularly to the C language, and its derivatives C++ and Java, but these rules often apply to other languages too.

Language Standards

A language standard is the best assurance that your program will work in the future, and in different environments to the one you are currently using. Knowing the standard is like knowing the road rules - it helps you choose the right way to solve a problem, and to know what will be true in other situations, as opposed to what is just a local custom.

For example, many C and C++ compilers add keywords to the language. Luckily, many also allow the option of switching off these additions, and for good reason: they can render a program non-portable. A case in point is the word 'FAR' on Windows platforms. It is used to refer to a 32-bit pointer to data, but it is not a keyword in the C language standard. Programs which use this keyword will have problems when moved to another platform, such as Linux or Macintosh, where that word is neither used, needed nor defined.

Knowing the standard makes you aware of what you can rely upon, and what you can't. Programming within the common, well used parts of the standard is the safest approach.

Here are some danger areas within the C language standard:

Sizes of data types

The sizes of an integer or a floating point number are not guaranteed. An int will be able to store numbers in the range -32768 to +32767, but it may be able to store a greater range. A floating point number might occupy 64 bits in memory, but only use 56 bits in practice. Any code which relies on these data types working the same way on different platforms may have problems when ported elsewhere.

Consider this definition of a two dimensional point, as might be used within a graphical program:

	struct Point {
		int x;
		int y;
	};

This definition uses two ints, so we can only be certain of storing the above given range of numbers (equal to 16 bits of information). Fortunately, this is likely to be sufficient for the foreseeable future, at least for computer screens. Computer displays currently extend up to 2000 pixels in each of the x and y directions, which is much smaller than the 32767 maximum. The real problem area is in printing to paper. Postscript printers often allow 600 or 1200 dots per inch, which can mean that the longer edge of a printed page may include 14000 dots or more, which is approaching the limits of resolution that an int can guarantee.

C and C++ both suffer from this vagueness about the sizes of basic data types. The designers of Java were aware of this problem, and specified that an int contains exactly 32 bits of data, allowing more than 4,000,000,000 possible values, which is more than enough for most applications. While many modern C and C++ compilers use this same size for an int, this fact cannot be relied upon, whereas in Java it is part of the specification, and so it is guaranteed.
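
In C, one defensive technique is to state such size assumptions explicitly, so that compilation fails on any platform where they do not hold. Here is a minimal sketch using a common C idiom (the macro name COMPILE_TIME_ASSERT is an illustrative assumption, not part of any standard):

	/* Declares an array type whose size is negative, and hence
	 * illegal, whenever the condition is false. */
	#define COMPILE_TIME_ASSERT(name, cond) \
		typedef char assert_ ## name [(cond) ? 1 : -1]

	/* Compilation fails if an int occupies fewer than 4 bytes. */
	COMPILE_TIME_ASSERT(int_is_at_least_32_bits, sizeof(int) >= 4);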

Byte order

Code which relies on data being organised in a particular byte order in memory is usually bad code. Consider this:

	struct {
		int   vertex;
		float length;
	} node;

	node.vertex = 7;
	node.length = 1.618;

	fwrite(&node, sizeof(node), 1, f);

This code defines a data structure to store information about a node in a graph, assigns some data into it, and then attempts to write that information to a file. Unfortunately, the code is not portable. The size of the int variable is not the same for all platforms, and neither is the float, but even if the sizes were the same, moving the data file to another computer may cause it to be interpreted differently. This is because the order of bytes in memory for one variable is not guaranteed. Results will differ depending on the hardware. The deceptive thing about this code is that it will work if you only use one computer; it is moving to a different computer that will reveal the deficiencies.

Byte placement

Another problem with the above code is that the spacing between elements of a structure is not guaranteed. The bytes which make up the float may occur immediately after the int, or there may be a 'hole' between the two, where no data is stored. Structures in C (and classes in C++) define the order of variables in memory, but do not guarantee the actual locations - it is up to the compiler writer to decide that detail, using a knowledge of the hardware. For this reason, relying on byte placements is also to be avoided at all costs.

Java avoids the problems of byte order and placement by implementing a Serializable interface which guarantees the way that data is stored whenever it is saved to a file, or written to the network. How things are organised in memory is irrelevant since there are no arbitrary pointers to memory in Java.
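
In C and C++, the same problems can be avoided without language support, by defining the file format independently of memory layout and writing each field one byte at a time in a documented order. Here is a minimal sketch (the helper name write_u32_le, and the choice of least significant byte first, are illustrative assumptions):

	#include <stdio.h>

	/* Write a 32 bit value to a file, least significant byte
	 * first, regardless of how the machine stores it in memory. */
	static void write_u32_le(unsigned long v, FILE *f)
	{
		putc((int) (v & 0xFF), f);
		putc((int) ((v >> 8) & 0xFF), f);
		putc((int) ((v >> 16) & 0xFF), f);
		putc((int) ((v >> 24) & 0xFF), f);
	}

The node example would then write its vertex field with write_u32_le(node.vertex, f). The float field needs a documented representation of its own - stored as text, or as a scaled integer, for instance - since floating point formats also vary between machines.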

In computer graphics programs written in C or C++, the order of data in memory is often important, because many programs write bytes directly to memory for the sake of speed. A program may be aware that video display memory is organised as an array of bytes, such as:

	R G B R G B R G B ...

Where 'R' stands for red, 'G' for green, and 'B' for blue. Writing a byte to an 'R' memory location changes the red component of the pixel on the screen.

Programs might thus include code such as this fragment:

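	/* assumed declarations: unsigned char *screen; int length;
	 * and 'color', a pointer to structures holding red, green
	 * and blue bytes */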
	while (length > 0) {
		*screen++ = (*color).red;
		*screen++ = (*color).green;
		*screen++ = (*color).blue;
		color++;
		length--;
	}

This code is difficult to understand because it involves pointer arithmetic, and because the organisation of memory in the graphics hardware itself is being exposed to the program. To modify this code to work on a different platform requires handling many different cases. What happens if the bytes are organised in the reverse order?

	B G R B G R B G R ...

Extra code is needed to support the two different organisations of display memory. In fact, there are many more than two cases to handle - more like twenty possibilities, covering every display from black and white terminals to 32 bit true colour screens.

For a graphical program to be truly portable it must hide these hardware-specific details behind a simple interface, while not sacrificing too much speed. Graphical programs require so much data to be sent to the screen that small inefficiencies can render a program unusable. Therefore, a portable, fast interface to graphics hardware is needed.

Unfortunately, many existing graphics interfaces expose too much of the hardware details. The X-Windows system, for instance, requires the programmer to ask for the byte order, and the bit order within each byte, in order to write pixels to memory quickly (ordinary drawing operations don't require this knowledge; only fast creation of arrays of pixels needs this detailed information).

This means that each X-Windows program which needs fast access to pixel memory must support every possible hardware configuration. A better solution would have been to require the programmer to write pixels into memory in one standard manner, and then each implementation of X-Windows reorganises the data internally, given a knowledge of the hardware. This would shift the hardware-specific and non-portable parts of the design behind a simpler interface. Incidentally, this approach is used by both the MS-Windows GDI interface, and the GraphApp portability toolkit.
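
A sketch of what such an interface might look like (the type and function names here are illustrative assumptions, not the actual GDI or GraphApp declarations):

	typedef struct {
		unsigned char alpha, red, green, blue;
	} Colour;

	/* The program always supplies pixels in this one standard
	 * layout; each platform's implementation of draw_pixels
	 * translates it into whatever the hardware requires. */
	void draw_pixels(int x, int y, int width, int height,
			const Colour *pixels);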

Signed types

In C and C++, whether the char data type is signed or unsigned is unspecified, and thus may vary depending on the compiler. A char is, on almost all platforms, an 8 bit number.

Since video memory is often organised into bytes which represent pixels on screen, the char type is frequently used to represent a pixel value. Unfortunately, a char may be signed or unsigned, and thus may either represent the signed range of numbers -128 to +127, or the unsigned range 0 to +255.

This fact becomes important when manipulating pixel values numerically, as in the following example:

	char red = 255;
	char green = 128;
	char blue = 5;
	long pixel_value;

	pixel_value  = ((long) red << 16);
	pixel_value |= ((long) green << 8);
	pixel_value |= (long) blue;

Here, a 32 bit pixel value is being constructed from three component colour values, by first left-shifting the bits of each component to the correct spots, then using bitwise-or to combine the bits into one value. The cast to long is required because otherwise the data may stay as an 8 bit char during the left-shift, which would make the bits 'fall off the left edge', producing zero.

This code fragment would work correctly if the char type is unsigned, but will not work if it is signed. If the values are signed, any number above +127 will actually become a negative number. So 255 would actually represent -1, and 128 would actually be -128, while 5 remains as +5. This is due to the way computers use two's complement to store negative numbers - the top-most bit is set to 1 whenever a number is negative.

Casting these numbers to a long integer type will sign extend the number. In effect, the top-most bit will be copied into all bits to the left of the original 8 bits, to ensure the number is still negative. This has a nasty side-effect in this line:

	pixel_value |= ((long) green << 8);

The green value is now negative, and will overwrite the data from the previous line, in which the red value was stored.

Again, this is a portability problem which may only occur on some compilers, making it difficult to spot. It occurs because parts of the C and C++ language standards are vague. It doesn't occur in Java because the signedness of the basic types is fixed in that language (all numbers are signed).
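
The straightforward fix in C is to declare the components as unsigned char, so that the values 0 to 255 are representable and no sign extension can occur. A corrected version of the fragment:

	unsigned char red = 255;
	unsigned char green = 128;
	unsigned char blue = 5;
	unsigned long pixel_value;

	/* unsigned char guarantees the values 0 to 255, so the
	 * casts below cannot sign extend. */
	pixel_value  = ((unsigned long) red << 16);
	pixel_value |= ((unsigned long) green << 8);
	pixel_value |= (unsigned long) blue;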

Right shift

In the code above we saw the bitwise left-shift operation used to combine colour components into a single pixel value. To retrieve those components we might expect to use the opposite operation, the right-shift:

	long pixel_value;
	int red, green, blue;

	...
	red   = (pixel_value & 0x00FF0000L) >> 16;
	green = (pixel_value & 0x0000FF00L) >> 8;
	blue  = (pixel_value & 0x000000FFL);

Parts of the 32 bit pixel value are masked off, using the bitwise-and operation. They are then right-shifted into the lowest bits.

This code is fragile, though only experienced programmers might realise why. The right-shift operator is different to the left-shift: left-shift is guaranteed to fill the vacated rightmost bits with zero, but right-shift is not guaranteed to fill the leftmost bits with zero. The compiler writer may implement right-shift such that negative numbers remain negative; in that case, if the top-most bit equals 1, that bit is copied into each new bit shifted into the number. The lines above work only because each masked value happens to be non-negative. Any variation in which a negative value is shifted - extracting a component from a signed value whose top bit is set, say - may produce unwanted 1 bits on some compilers.

The robust way to implement this operation is to perform the mask operation after the right-shift:

	red   = (pixel_value >> 16) & 0x00FF;
	green = (pixel_value >> 8)  & 0x00FF;
	blue  = pixel_value & 0x00FF;

Because the final mask discards whatever the shift filled in at the left, this version works whether or not the shift sign extends, and the colour component values are always positive. Java avoids right-shift problems by defining two different right-shift operators. The 'arithmetic right shift' operator >> performs a signed right-shift, while the 'logical right shift' operator >>> performs an unsigned, zero-filling right-shift.

An even better solution is to avoid shifting and masking altogether, by using an appropriate higher-level approach. Such a technique is taken in GraphApp amongst other systems. For example, GraphApp defines a Colour type:

	struct Colour {
		unsigned char alpha;
		unsigned char red;
		unsigned char green;
		unsigned char blue;
	};

The alpha component is not a colour; it is used to signal transparency. Let us ignore it for now.

Two things are notable about this definition. Firstly, the signedness of the chars is not left up to the compiler; they are forced to be unsigned numbers, thus avoiding many of the problems associated with sign extension. Secondly, the use of a structure means we can write code such as:

	Colour c;

	c.red = 255;
	c.green = 128;
	c.blue = 0;

This is much easier to understand than the previous masking and shifting examples, and much more portable. This code should work on any C compiler and platform.

With care, C and C++ can be as portable as Java programs. It simply requires knowing how to avoid the danger areas in each language.

Java problems

The above discussion may seem to be advocating Java over C or C++ as a better language for graphics applications, and certainly there are many benefits in that choice, but no language is completely without fault.

The decision by the designers of Java to make all numbers signed means that operations which are best done using unsigned numbers must be written differently.

Using the GraphApp definition of Colour, it is possible to compare two colours to discover which has a brighter red component:

	if (c1.red > c2.red)
		...

Written in Java, we could define Colour as a class:

	class Colour {
		byte alpha;
		byte red;
		byte green;
		byte blue;
	}

The Java byte type is the equivalent of a signed char in C. It is a signed 8 bit quantity which can express the range of numbers -128 to +127. The question becomes: which number represents the brightest value that one of these colour components can attain?

The simplest answer is that +127 is the highest byte value and therefore also the brightest component value. Unfortunately, many graphics algorithms assume zero means black, and that +255 is the brightest value, so we are either forced to subtract 128 when placing values into the class, or else map the values in the range +128 to +255 onto the range -128 to -1.

Both solutions are, predictably, complicated. In the latter situation, comparing two red values could involve code such as:

	if ((c1.red < 0 && c2.red >= 0) ||
	    ((c1.red < 0) == (c2.red < 0) && c1.red > c2.red))
		...
		...

This is non-intuitive and difficult to write correctly. (The usual Java idiom, comparing (c1.red & 0xFF) with (c2.red & 0xFF), is shorter, but simply reintroduces the masking we were trying to avoid.) It's an example of how restrictions in a language can impinge on the expressiveness of that language, and sometimes reduce its ability to solve problems.

Standard Libraries

Languages are not the only place where portability problems arise. Another common problem is relying on non-standard additions to the standard library of functions which comes supplied with a compiler.

For instance, the strdup function in C is a commonly used function to duplicate a string, yet the C language standard of 1989 does not mention this function. Many compiler manufacturers include it with their compilers, but some stick to the standard and do not, causing a problem for code which uses it. Hence, it is a good idea to avoid using this function, and instead write your own portable function which does the same thing, but has a different name to avoid conflicting with the compiler's definition.
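
A minimal sketch of such a replacement (the name copy_string is chosen to avoid any clash with a compiler-supplied strdup):

	#include <stdlib.h>
	#include <string.h>

	/* Duplicate a string into freshly allocated memory,
	 * using only standard library functions. */
	char *copy_string(const char *s)
	{
		char *t = (char *) malloc(strlen(s) + 1);

		if (t != NULL)
			strcpy(t, s);
		return t;
	}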

In the case of graphics programming, there are a great variety of graphics libraries included with compilers, but none of them are standard. Using any library supplied by one compiler manufacturer will almost guarantee that a program will work on that platform but on no other. There are graphics libraries which try to behave the same on many different platforms, and work with many different compilers, such as Tcl/Tk, and Java's AWT. These are probably a better approach to achieving portable graphics than any compiler-specific toolkit.

GraphApp goes further than these libraries in guaranteeing portability for graphics programs. GraphApp is able to guarantee that a graphical program will look the same down to each pixel on a window. GraphApp also advocates supplying a portable Unicode font with a program, to facilitate internationalisation.

Program Organisation

The way a program is organised into source code and header files, and into directories, can affect its portability.

Inclusion of files

Including files from the same directory as a source code file is simple to achieve:

	#include "header.h"

The quotes around the file name generally cause the compiler to search for the file within the same directory as the source code file. Some compilers interpret quotes slightly differently, instead searching for the file within the compiler's current working directory, which might not be the same thing. Suppose we were to compile a program thus:

	cc code/program.c

Here, the compiler's working directory is different to the directory where the program source code file is located. This could cause problems.

The C standard of 1989 says that quotes cause a search for the file, first "in association with the original source file" (a deliberately implementation-dependent phrase), and if that search fails, then as if the quotes were replaced by angled brackets < > so that

	#include "stdio.h"

could have the same effect as

	#include <stdio.h>

Few compilers honour this part of the C standard.

Absolute and relative path names should also be avoided within such inclusions. The standard says that if the characters ", ', or \, or the sequence /*, appear within the file name, the result is undefined. In practice, MS-Windows compilers often disallow the / character but do allow the \ character as a directory separator. Clearly this would not work on a Linux platform. Therefore, including directory names within such statements is inherently non-portable.

Conditional compilation

Conditional compilation using the C preprocessor's #ifdef directive is fraught with problems. There are almost no cases where it is needed.

The problem with conditional compilation is that parts of the program are never seen by the compiler, and thus may contain bugs which will only become apparent when the program is ported to another environment.

Consider the following example:

	#ifdef WIN32
	  #ifndef _WIN32_WCE
	    #define debug printf
	  #else
	    int debug(char *fmt, ...)
	  #endif
	#endif

Here, the code contains a bug if the program is compiled with both WIN32 and _WIN32_WCE defined: the function prototype of debug is missing a terminating semi-colon. This error will only be found when the compiling conditions cause that line to be included.

In other words, there is a bug in the code, but the compiler is prevented from seeing all the code, so the bug remains. Compilers are very good at picking up pedantic little problems such as missing semi-colons - so why not let them do the job they're good at?

A better solution would be to avoid using the same name debug as both a definition and a function name, so that the function prototype can be moved outside the conditional block, and hence the compiler can always see and check that code.
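
For example (debug_printf is a hypothetical name; the point is simply that the prototype sits outside any conditional, so every compiler sees and checks it):

	#include <stdarg.h>
	#include <stdio.h>

	/* Always declared, and therefore always checked: */
	int debug_printf(const char *fmt, ...);

	/* One portable implementation, forwarding to vprintf; a
	 * platform without stdio could supply its own version in
	 * a separate source file instead. */
	int debug_printf(const char *fmt, ...)
	{
		va_list args;
		int n;

		va_start(args, fmt);
		n = vprintf(fmt, args);
		va_end(args);
		return n;
	}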

The complexity of nested conditionals also causes problems. Here is an example adapted from a popular graphics library:

	#ifdef __BORLANDC__
	#  if defined(__LARGE__) || defined(__HUGE__)
	#    define LDATA 1
	#  elif defined(__COMPACT__)
	#    define LDATA 1
	#  else
	#    define LDATA 0
	#  endif
	#  if !defined(__WIN32__) && !defined(__FLAT__)
	#    define MAX_MALLOC_64K
	#    if (LDATA != 1)
	#      ifndef FAR
	#        define FAR __far
	#      endif
	#      define USE_FAR_KEYWORD
	#    endif   /* LDATA != 1 */
	#  endif  /* __WIN32__, __FLAT__ */
	#endif   /* __BORLANDC__ */

What does this code do? The first problem is the complexity of these conditionals. We cannot be sure what is happening here without very careful reading.

The second problem is that the above code is attempting to compensate for variations between different compilers. That's the wrong approach. The right approach is to stick to the C standard, and assume the compiler does too. This way, the program and the compiler are speaking the same language. Otherwise, it is akin to speaking a pidgin form of C.

Here's another example of the same problem from the same header file:

	#ifdef BSD
	#  include <strings.h>
	#else
	#  include <string.h>
	#endif

Including this header file in a program causes another header file to be included. Which other header file is included depends on whether the symbol BSD is defined. What happens if my program defines that symbol first? What happens if a new compiler defines that symbol to mean something quite different? The C standard says there is a header file called string.h, not one called strings.h. BSD systems are therefore wrong. Trying to correct for this kind of mistake introduces two new mistakes into the code: assuming BSD has one and only one meaning on all platforms; and including a header file from another header file (often a bad idea).

Including header files from within other header files is often a poor programming practice because it means that symbols might be defined twice, or that some symbols might be defined whether or not we wanted them to be.

A real example: Suppose we were using a variable called sun within an astronomical program. This might work on many compilers and platforms, but there may be a compiler where #include <stdio.h> causes the inclusion of another file in which sun is defined as the version number of the compiler. Our variable would now become a constant number. The insidious thing about this is that we didn't ask for any other header file to be included - the compiler just did it.

This problem plagues graphical programs which attempt to define types such as Window, Bitmap and so on, since some of these names are defined already on various platforms. Different names are used on each different platform, so finding one consistent set of names which works everywhere is hard.

There is a way of working around this problem, by redefining the problematic symbols:

	#define  Window   X_Window
	#include <X11/Xlib.h>
	#undef   Window

This is an extreme solution, but sometimes necessary for portability. Sometimes even this solution doesn't work, because some compilers use 'pre-compiled headers', which means that to save time the compiler includes the standard header files in some pre-digested binary format. Textual substitution such as is used above will not work in these cases.

Another problem with conditional compilation is that each conditional doubles the number of test cases needed to test a program. Every #ifdef statement requires two test cases, to check what happens when the branch is taken and when it is not. How would we test the following piece of code?

	#ifdef DEBUG
	  debug = 1;
	#endif
	#ifdef EXPAND_TABS
	  tabs--;
	#endif
	#ifdef UNICODE
	  utf8_to_c(line);
	#endif
	#ifdef STDIO_SUPPORT
	  printf("Processing line %d\n", linenum);
	#endif

This code, despite also being hard to read, requires not four test cases to verify its correctness, but sixteen! Each conditional might or might not occur, so there are two possibilities after the first conditional (either debug is, or is not, set to one), four possibilities after the second conditional (debug is or is not changed, tabs is or is not decremented), and so on.

Much better than the conditional compilation approach is to move all system-dependent code into a separate set of files. This approach is taken in GraphApp, where the X-Windows specific code is kept in a separate directory to the MS-Windows specific code. It makes code much easier to read and therefore get right in the first place. The same approach can be used for compiler-specific code, if such code is really necessary.

If fine-grained conditionals are needed, such as for debugging, the humble if statement can do the same work as the #if, while also allowing the compiler to check the affected code:

	const int debug = 0;
	...
	if (debug) {
		...
	}

In the above example, debugging code is kept inside an if statement. Modern compilers will be able to recognise that the debug flag is a constant, which is currently set to zero, and hence the entire if statement block can be safely removed during compilation. The major benefit of this approach is that the code within the statement block will still be parsed by the compiler, so that errors within that block will be found and reported.

Sometimes conditional compilation is used to deliberately conceal an error, as in this common construction:

	#ifndef MY_HEADER
	#define MY_HEADER
	...
	#endif

Here, a header file keeps all of its definitions within a conditional. If the symbol MY_HEADER is not yet defined, it is defined, followed by the other things inside the header file. Including this header file a second time finds the symbol MY_HEADER already defined, so the compiler skips the contents of the file without issuing any warning.

This construction is often used to allow a header file to be included twice without causing an error. Yet it should cause an error! Errors are good things, because they alert a programmer to problems within the code. Concealing the error is a way of covering up the poor practice of header files including other header files.

A better solution is to simply ask the programmer to include header files in the correct order in the first place, thus producing higher quality code, rather than allowing the programmer to speak pidgin C, and then trying to compensate for the programmer's mistakes by letting them slip through.

For these reasons, conditional compilation should be avoided as much as possible. Separate sets of files for different platforms or compilers are a much better approach to managing the complexity of C and C++ programs. The JPEG library uses 143 conditional compilation statements; the PNG library uses 632; GraphApp code uses five.

The designers of Java wisely avoided these difficulties by disallowing the use of a preprocessor within that language, and by using proper scope rules to ensure each symbol exists within an appropriate name space, thus avoiding name conflicts and compiler dependencies.

File Access

The way files are accessed on different operating systems is a source of portability problems. There are a few main problems: directory separators; text files; file name limitations; file system conventions; and file structure.

Directory separators

Accessing files on several platforms using C is simple if the entire path name is known: it is simply a call to the standard fopen function:

	FILE *f = fopen(pathname, "r");

The 1989 C standard defines this function as part of the standard library, but is vague about directory separators.

A Linux program uses the forward slash / to separate one directory name from the rest of the path, as in

	/usr/share/fonts/times

The same path name on an MS-Windows machine uses the backslash \ character to do the same job. A Macintosh uses the colon : instead.

These differences mean that even simple C and C++ programs can be rendered non-portable if they access files in other directories.

A simple solution to this problem is proposed in GraphApp: replace fopen by another function which only accepts Linux-style path names. Internally the path name is converted to whatever the platform requires, but from the programmer's point of view all file systems look like Linux.
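
A minimal sketch of that idea (the function name open_file and the per-platform DIR_SEPARATOR constant are illustrative assumptions, not GraphApp's actual interface):

	#include <stdio.h>
	#include <string.h>

	#define DIR_SEPARATOR '/'  /* ':' on Macintosh, '\\' on Windows */

	FILE *open_file(const char *path, const char *mode)
	{
		char native[FILENAME_MAX];
		size_t i;

		if (strlen(path) >= FILENAME_MAX)
			return NULL;	/* path too long for this platform */

		/* Translate Linux-style separators into native ones. */
		for (i = 0; path[i] != '\0'; i++)
			native[i] = (path[i] == '/') ? DIR_SEPARATOR : path[i];
		native[i] = '\0';

		return fopen(native, mode);
	}

In keeping with the advice above about conditional compilation, the definition of DIR_SEPARATOR would live in each platform's own set of source files, rather than behind an #ifdef.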

File system conventions

Even if the above approach is used, there are still difficulties. MS-Windows machines don't have the same directory structure conventions as Linux machines, so a file might have to reside in a quite different location. For example,

	/usr/share/fonts/times

might be located at

	\My Documents\Fonts\times

Using a search path within the fopen replacement function may help, but introduces uncertainty. Relative path names can help the situation considerably, because they can often be correct when an absolute path name is not. There is no simple solution to this problem.

Text files

Text files on Linux machines use the line feed or newline character '\n' to mark the end of a line. This is sometimes abbreviated as LF.

On MS-Windows machines, two characters are instead used: the carriage return, '\r' or CR, followed by a line feed LF. On Macintoshes, a single carriage return character is used to mark the end of a line.

These differences can cause portability problems for programs which must access text files. The C library attempts to conceal these differences by allowing file access using a special text mode:

	FILE *f1 = fopen(filename1, "r");
	FILE *f2 = fopen(filename2, "rb");

The first function call opens a file in text mode, the second in binary mode. Text mode means when reading a file on an MS-Windows machine, CR-LF will be converted to a single LF. When writing to that file, the reverse occurs.

This solves some portability problems, but not if the data file itself needs to be portable. In that case, it would be necessary to convert the file whenever it moves to another platform. Otherwise a CR-LF file from an MS-Windows machine could find its way onto a Linux machine, where the CR will not automatically be stripped - introducing buggy data into the program.

The simplest way around these problems is to open all data files in binary mode, and then treat all combinations of CR-LF, LF or CR as meaning a single LF character. Such code will then work on any text file from any system.
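
A minimal sketch of such a reader (read_char is a hypothetical helper name):

	#include <stdio.h>

	/* Read one character from a file opened in binary mode,
	 * folding CR, LF and CR-LF into a single '\n'. */
	int read_char(FILE *f)
	{
		int ch = getc(f);

		if (ch == '\r') {
			ch = getc(f);
			if (ch != '\n' && ch != EOF)
				ungetc(ch, f);	/* lone CR: keep next byte */
			return '\n';
		}
		return ch;
	}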

File name limitations

On old MS-Windows systems, and even on modern floppy disks, file names are restricted to eight letters in the name, followed by a dot, then by up to three more letters. This has been termed the 8.3 limit, and is a severe restriction. Not only that, but file names are not case-sensitive.

Modern MS-Windows operating systems allow for longer file names, but only on hard drives and CD-ROMs, so the file names may be distorted or truncated when transferring files using floppy disks.

Macintoshes preserve the case of file names, which may be up to 31 characters in length, although two names which differ only in case refer to the same file.

Linux machines and most Unixes allow file names to have up to 255 characters, upper and lower case. Linux even allows file names to be written in UTF-8 encoding, allowing for internationalisation. The only characters which are not allowed in a Linux file name are the directory separator '/' and the null character '\0'.

The simplest approach to providing portability across all these systems is to stick to the 8.3 limit, and avoid having file names which differ only by the case of letters. Keeping all file names as lower case works well.

File structure

On most operating systems, a file is thought of as a linear stream of bytes. When the file is opened, the file cursor is located at the start of the file, and reading bytes from it moves the cursor towards the end.

On Macintosh systems this concept of a file is only half the story. Files consist of two parts, called 'forks'. These two parts are known as the 'data fork' and the 'resource fork'. The data fork is essentially the same as the traditional C file. Calling fopen on a Macintosh file will give the program access to the data fork only.

The other part, the resource fork, is a data structure, not a stream of bytes. It contains named resources such as icons, menu definitions, compiled code, and so on. Because these things can only be accessed using functions which are outside the C standard library, this entire aspect of Macintosh file structure cannot be used if a program is to be truly portable.

MS-Windows executable programs have something similar, but the resources are located within a normal file, so ordinary programs which copy a file will work on such a file. They will not work on a Macintosh program however, since they would only copy the data fork.

Care must thus be taken when designing portable programs for these platforms. Relying on a resource system is a sure way of making code work on one and only one operating system.

Graphical files

These file system differences can affect any program, not just a graphical one. There are special problems which affect graphical programs, however.

A major concern is file formats for storing graphics data. The GIF format, for instance, stores 16 bit integers in PC format (least significant byte first), so on all platforms it is safest to read the file one byte at a time, and reconstruct the integers using left shifting. Reading 16 bits directly into an int variable is inherently non-portable and is the wrong way to do things.
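
A minimal sketch of the byte-at-a-time approach (read_u16_le is a hypothetical helper name):

	#include <stdio.h>

	/* Read a 16 bit value stored least significant byte first,
	 * returning -1 on end of file. */
	long read_u16_le(FILE *f)
	{
		int lo = getc(f);
		int hi = getc(f);

		if (lo == EOF || hi == EOF)
			return -1L;
		return (long) lo | ((long) hi << 8);
	}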

Other graphics formats, such as TIFF, allow either byte order within an integer (a marker in the file's header records which order was used), so a portable reader must be prepared to handle both cases, making TIFF even more troublesome to support than GIF.

The JPEG and PNG formats are designed to be portable, and are therefore better formats to use. They contain raster data, which are arrays of pixels. The PNG format is especially well-designed, and implementations can check if the file has been accidentally converted into a text file (LF changed to CR-LF or vice-versa).

There is no one vector graphics data format designed to work on all platforms.

Fonts

A font is a typeface as used in a graphical program. Each operating system tends to include fonts from a variety of vendors, and different platforms thus can have quite different sets of fonts available.

Relying on any one font being present is a sure way to make a program non-portable. For example, it is possible to download a PDF file from the web which deals with mathematics, but find it completely unreadable, because the mathematical symbols used within the document are actually just ASCII characters drawn in a font which is not present on your platform.

Fonts used in this way (to re-map ASCII characters to different symbols) are a poor use of fonts. Each symbol should have a unique number to identify it, rather than re-using existing numbers such as the ASCII character set. Re-using numbers breaks down if the required font doesn't exist, because the symbols may appear as ordinary English letters instead. ("Code pages" are another name for this poor practice.)

Unicode is a better approach to that problem, because it gives each symbol a unique number. A symbol keeps its identity regardless of font, so that even if the precise font isn't available, a close approximation to its glyphs can be found.

If a special font is required by an application, it should be possible to include the font with the application so that wherever the application is running it has access to the glyphs needed.

Because fonts may differ slightly between platforms it may not be possible to rely on an application looking the same on all platforms at the pixel level. For example, Java Applets need to use a geometry manager to ensure the placements of elements of the user interface are appropriate. The programmer specifies logical positioning rather than absolute locations for the user interface elements.

GraphApp takes a different approach, supplying a portable Unicode font with its code, so that applications can have the same font available everywhere. It by-passes the ordinary font rendering technology on each platform, implementing its own instead, which guarantees the same results on all platforms.

The new X-Windows extension Render uses a similar approach to font rendering. As applications demand more control over font rendering aspects such as scalability, anti-aliasing and rotation, so too does the usefulness of traditional font drawing methods decrease. The X Render extension shifts the responsibility for locating fonts and rendering glyphs to the client side, and simply allows a fast way for a client application to send the glyphs to the graphics server and then render text using those glyphs.

The Qt toolkit uses the Render extension (where installed) to draw using fonts. GraphApp already implements a similar technique, but using portable code which also works on non-X platforms.

Using a portability toolkit to handle the rendering of fonts is thus a good way of isolating a program from these changes, and getting more control over fonts.

Internationalisation

The use of Unicode is a good idea in portable applications, because if the program is successful, it will likely be used in situations which the original designers had not anticipated. One obvious situation is using a different language.

Not all computers use ASCII. Not all computer users speak English. Portable programs should be able to adapt to changes in language as much as possible, within reason. Each program's approach to internationalisation will differ, but some technologies help to make transitions easier.

Unicode is a fundamental technology for storing text from any language. Unlike code pages, Unicode is not limited to only storing one or two languages per document. Glyphs from most languages on Earth are present in the Unicode set.

The most popular format for storing Unicode text is called UTF-8. All ASCII text is correctly formatted UTF-8 text, so it has some backwards compatibility, but characters above ASCII value 127 (accented European letters for instance) need two or more bytes to store the value.
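
A minimal sketch of the encoding rules for code points below 0x10000 (utf8_encode is a hypothetical helper name; full UTF-8 also requires a four byte form for higher code points):

	/* Encode one Unicode code point as UTF-8, returning the
	 * number of bytes written to buf (1 to 3). */
	int utf8_encode(unsigned long c, unsigned char *buf)
	{
		if (c < 0x80) {			/* ASCII: one byte */
			buf[0] = (unsigned char) c;
			return 1;
		}
		if (c < 0x800) {		/* two bytes */
			buf[0] = 0xC0 | (unsigned char) (c >> 6);
			buf[1] = 0x80 | (unsigned char) (c & 0x3F);
			return 2;
		}
		buf[0] = 0xE0 | (unsigned char) (c >> 12);  /* three bytes */
		buf[1] = 0x80 | (unsigned char) ((c >> 6) & 0x3F);
		buf[2] = 0x80 | (unsigned char) (c & 0x3F);
		return 3;
	}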

Many existing portability packages support Unicode. Java, the Inferno portable operating system, and GraphApp, for example, all use Unicode and UTF-8 encoded text.

Unicode is, of course, not the only issue with internationalisation. The topic is too broad to deal with in any depth here. Let us consider at least the issue of drawing text.

Many languages draw glyphs from left to right down the page. English and most European languages fall into that category. Some languages are written right to left, such as Hebrew and Arabic. Some can be written in either direction, such as Chinese, or even down the page.

A program which must be portable to other languages should be able to cope with these changes. If using a portability package, the ability to render fonts in any direction is clearly needed. If the system merely looks at the 'locale' and decides which way to draw text based on those settings, that is not enough; it should be possible to control the layout of any line of text individually. Otherwise it may not be possible to mix right to left text with left to right within the one document. Logically, English should always be written left to right, even on a computer whose 'locale' says to draw letters from right to left.

The Macintosh operating system has had foreign language support for many years and does a fairly good job of things. Displaying a document which contains a mixture of English and Hebrew is handled by drawing all text aligned to the right of the page, but the English is correctly written left to right, while the Hebrew is correctly written right to left. Bi-lingual readers can thus read both with few problems.

Of course, the Macintosh OS is not portable and so relying on any one operating system's foreign language support features is likely to make your program non-portable. Each application will have different needs. A word processor is likely to require a lot more effort to port than a CAD program. Indeed, word processors which can handle any language are unlikely to exist for quite some time, because every language is so unique. Defining the glyphs of a language is only a small (but important) part of defining the way a language is used.

Summary

This document has explored some of the areas where graphical programs in particular suffer from poor assumptions and poor portability. There are many more issues to be addressed, such as look-and-feel and drag-and-drop.

Portability toolkits exist which can make programs easier to port. Each has its strengths and weaknesses. A toolkit on its own won't make a program completely portable. It is possible to write non-portable code in Java, and truly portable code in C, or vice-versa. But toolkits can help by placing system-dependent components behind a consistent interface.

This technique can also be used when structuring a program. A well designed program defines good interfaces between components. Identifying the parts of a design which are inherently non-portable and moving them into separate files is a better technique than scattering them through source code files using conditional compilation, for example.

Standards can help with the interchange of information. Graphics file formats such as PNG, and text standards such as Unicode, can help a program achieve a much higher level of portability than custom-designed solutions.

Paying attention to details such as file name limitations, and end-of-line markers within a textual document can have rich rewards.

Portability is not just a matter of 'fixing up' source code when it is moved to a new environment. At best, portability is a philosophy of writing good code which makes few assumptions, and clearly isolates those it does make behind well-defined interfaces.

(C) 2001 GraphApp Documentation Team