Appendix
Computer code
Here are the details of the computer algorithms used to generate Figures 12, 13, and 16 to 18. Initially, I thought of using symbolic mathematical notation for this. Ultimately, however, I concluded that the computer code itself was a much less ambiguous, more direct, and more convenient alternative. You will therefore find below the code used to generate the figures.
The code has been written in Processing, an open-source programming language and environment based on Java, with special support for animations. At the time this book was written, Processing and its extensive documentation could be freely downloaded from http://www.processing.org. If you intend to run the programs below yourself, you will need to have Processing installed on your system. In what follows, I will assume that anyone interested in the level of detail in this appendix is reasonably at ease with programming languages like Java.
The first program, shown below, is the one used to create a metaphor for the Source, as illustrated in Figures 12 and 13. It starts with some comments on how to use the program, as well as key variable declarations, which are also commented:
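What follows is a minimal sketch of such declarations, assuming the Source is emulated as a square two-dimensional cellular automaton stored in an integer array; the identifiers (gridSize, rule, and so on) and the rule table values are illustrative placeholders, not necessarily those of the original program.

// Press 'p' to pause or resume the animation, and 'r' to restart it
// (see the keyboard handling function at the end of the program).

int gridSize = 121;      // number of cells along each side of the grid
int cellSize = 4;        // size, in pixels, of each cell on screen
int[][] cells;           // current generation: 0 = dead (white), 1 = alive (black)
int[][] nextCells;       // buffer holding the next generation
boolean running = true;  // whether the animation is currently advancing

// Rule table mapping each of the twelve neighborhood patterns of
// Figure 11 (indices 0 to 11) to the next state of the center cell.
// These values are arbitrary placeholders, not the book's actual rule.
int[] rule = {0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0};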
The next part of the code is the setup routine required by Processing. It defines the size of the window used to display the animation, the visual scheme used, and initializes some of the variables:
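A sketch of that routine under the same assumptions, sizing the window to fit the grid and deferring the grid itself to an initialization function shown further below:

void setup() {
  size(484, 484);     // gridSize * cellSize pixels per side
  noStroke();         // cells are drawn as borderless squares
  initializeCells();  // all cells dead except the single center cell
}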
The next segment is the drawing function required by Processing, responsible for the main animation loop:
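A corresponding sketch, painting live cells black on a white background and then advancing the automaton by one generation per frame:

void draw() {
  background(255);  // clear the window to white
  for (int x = 0; x < gridSize; x++) {
    for (int y = 0; y < gridSize; y++) {
      if (cells[x][y] == 1) {
        fill(0);  // live cells are black
        rect(x * cellSize, y * cellSize, cellSize, cellSize);
      }
    }
  }
  if (running) updateCells();  // advance to the next generation
}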
The function below is responsible for updating the states of the cellular automaton, carrying it over to the next generation:
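A sketch of that function, assuming the next state of every cell is simply looked up in the rule table, indexed by the pattern its neighborhood currently forms:

void updateCells() {
  for (int x = 0; x < gridSize; x++) {
    for (int y = 0; y < gridSize; y++) {
      // classify the neighborhood and look up the resulting state
      nextCells[x][y] = rule[patternAt(x, y)];
    }
  }
  // swap the buffers so the next generation becomes the current one
  int[][] temp = cells;
  cells = nextCells;
  nextCells = temp;
}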
To determine which of the twelve patterns illustrated in Figure 11 is present in a given neighborhood at a given iteration, it is convenient to first calculate the total number of live (black) cells in that neighborhood:
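A sketch of such a counting function, assuming a von Neumann neighborhood (the cell itself plus its four orthogonal neighbors) and wrap-around edges; both the neighborhood shape and the boundary treatment are assumptions here:

int liveCount(int x, int y) {
  int left  = (x + gridSize - 1) % gridSize;  // wrap around the edges
  int right = (x + 1) % gridSize;
  int up    = (y + gridSize - 1) % gridSize;
  int down  = (y + 1) % gridSize;
  return cells[x][y] + cells[left][y] + cells[right][y]
       + cells[x][up] + cells[x][down];
}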
The function below can then identify which of those twelve patterns is at hand, using some computational shortcuts:
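The classifier below rests on an assumption about Figure 11: that the twelve patterns correspond to the center cell's two possible states combined with six neighbor configurations (zero, one, two adjacent, two opposite, three, or four live neighbors). Under that assumption, the shortcut is that the live-cell count alone settles every case except the two-neighbor one:

int patternAt(int x, int y) {
  int center = cells[x][y];
  int neighbors = liveCount(x, y) - center;  // count first, as a shortcut
  int shape;
  if (neighbors < 2) {
    shape = neighbors;  // zero or one live neighbor
  } else if (neighbors == 2) {
    // two live neighbors form either an opposite or an adjacent pair
    int left  = (x + gridSize - 1) % gridSize;
    int right = (x + 1) % gridSize;
    boolean opposite = (cells[left][y] == 1 && cells[right][y] == 1)
                    || (cells[left][y] == 0 && cells[right][y] == 0);
    shape = opposite ? 2 : 3;
  } else {
    shape = neighbors + 1;  // three or four live neighbors
  }
  return center * 6 + shape;  // a pattern index from 0 to 11
}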
Here is some initialization code, called early in the execution of the program. Notice that all cells are initialized to dead (white, or zero), except the center cell, which alone is initialized to alive (black, or one):
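A sketch matching that description:

void initializeCells() {
  cells = new int[gridSize][gridSize];
  nextCells = new int[gridSize][gridSize];
  for (int x = 0; x < gridSize; x++) {
    for (int y = 0; y < gridSize; y++) {
      cells[x][y] = 0;  // every cell starts dead (white)
    }
  }
  cells[gridSize / 2][gridSize / 2] = 1;  // except the center cell (black)
}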
Finally, here is the code responsible for reading the user's keyboard inputs to control the program:
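A sketch assuming two simple controls, pausing and restarting; the actual keys and controls of the original program may differ:

void keyPressed() {
  if (key == 'p') running = !running;  // pause or resume the animation
  if (key == 'r') initializeCells();   // restart from the single live cell
}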
The code for generating Figure 14 can be derived trivially from the program above. Therefore, it will not be discussed here.
Now, below, you will find the complete code used to generate Figures 16 and 17. Keep in mind that this is an entirely separate, standalone program. Although the code for emulating the Source is repeated in it, since it is required for determining the Source projection plane, I will comment only on the new segments of the code, responsible for emulating the plane of manifestation and for displaying the state transition rules with different shades of grey. Here is the beginning of the code where, once again, the variable declarations are commented so you can get an early idea of what each variable represents:
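A sketch of those declarations, under the same caveats as before. The Source machinery of the first program (gridSize, cells, nextCells, rule, and the functions already shown) is assumed to be repeated verbatim and is not listed again, except that cellSize is reduced here so that three columns fit on the screen:

int worldSize = 121;       // number of cells in the 1D plane of manifestation
int cellSize = 2;          // smaller cells, so that three columns fit
int[] world;               // current states of the plane of manifestation
int[] nextWorld;           // buffer holding its next generation
int[] projection;          // the Source projection plane, one bit per cell

// Learned transition rules: for each cell, one propensity value per
// possible three-cell neighborhood configuration (2^3 = 8 of them).
// A value near 1.0 means "the next state should be alive";
// a value near 0.0 means "the next state should be dead".
float[][] propensity;
float learningRate = 0.1;  // placeholder; the book's actual value is not shown

int generation = 0;        // current row of the scrolling display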
The setup and drawing routines below are analogous to those we discussed earlier, except that several new variables now need to be initialized and then iteratively updated within the main drawing loop:
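A sketch under the further assumption that the Source projection plane is read off the center row of the two-dimensional Source grid; since the actual projection method is not specified here, projectSource() below is illustrative:

void setup() {
  size(726, 484);  // three columns of worldSize * cellSize pixels each
  noStroke();
  background(255);
  initializeCells();  // the Source, exactly as in the first program
  world = new int[worldSize];
  nextWorld = new int[worldSize];
  projection = new int[worldSize];
  propensity = new float[worldSize][8];
  for (int i = 0; i < worldSize; i++) {
    for (int c = 0; c < 8; c++) {
      propensity[i][c] = 0.5;  // start with no bias either way
    }
  }
}

void draw() {
  projectSource();  // read the projection plane off the Source
  drawColumns();    // paint the current generation (see below)
  updateWorld();    // vote, manifest, and learn
  updateCells();    // advance the Source, as in the first program
  generation++;
  if (generation * cellSize >= height) noLoop();  // stop when the screen is full
}

void projectSource() {
  for (int i = 0; i < worldSize; i++) {
    projection[i] = cells[i][gridSize / 2];  // center row of the Source grid
  }
}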
Here, the three columns of Figures 16 and 17 are actually drawn on the screen, after the corresponding variables have been updated above:
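A sketch assuming each generation adds one horizontal strip to each of three columns: the Source projection plane on the left, the plane of manifestation in the middle and, on the right, each cell's learned rule for its current configuration rendered as a shade of grey. This layout is an assumption about the figures:

void drawColumns() {
  int y = generation * cellSize;
  for (int i = 0; i < worldSize; i++) {
    // column 1: the Source projection plane (black = alive)
    fill(projection[i] == 1 ? 0 : 255);
    rect(i * cellSize, y, cellSize, cellSize);
    // column 2: the plane of manifestation
    fill(world[i] == 1 ? 0 : 255);
    rect((worldSize + i) * cellSize, y, cellSize, cellSize);
    // column 3: the learned rule for the cell's current configuration,
    // as a shade of grey (white = certainly dead, black = certainly alive)
    float p = propensity[i][configAt(i)];
    fill(255 * (1 - p));
    rect((2 * worldSize + i) * cellSize, y, cellSize, cellSize);
  }
}

// encode the three-cell neighborhood of cell i as a number from 0 to 7,
// with wrap-around at the edges of the plane of manifestation
int configAt(int i) {
  int left  = world[(i + worldSize - 1) % worldSize];
  int right = world[(i + 1) % worldSize];
  return left * 4 + world[i] * 2 + right;
}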
The next loop controls the transition of the 1D cellular automaton, corresponding to the plane of manifestation, to its next generation:
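A sketch of that loop: every cell's next state is first decided by the vote, learning then takes place while both generations are still available, and the buffers are finally swapped:

void updateWorld() {
  for (int i = 0; i < worldSize; i++) {
    nextWorld[i] = voteFor(i);  // the vote, shown below
  }
  learn();  // learning from what actually manifested, shown further below
  int[] temp = world;
  world = nextWorld;
  nextWorld = temp;
}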
In the lines below, the vote is performed across the three cells in a neighborhood within the plane of manifestation, to determine the next state of a cell. The cell's own vote counts twice as much as those of its two neighbors in the plane of manifestation. Notice that, when a tie occurs, the next state is determined directly by the Source projection plane:
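A sketch of the vote, assuming each of the three cells votes according to its own learned propensity for the center cell's current configuration. With weights of two, one, and one, a tie can only occur when the cell's own vote opposes both of its neighbors' votes, and the Source projection plane then decides:

int voteFor(int i) {
  int left  = (i + worldSize - 1) % worldSize;
  int right = (i + 1) % worldSize;
  int c = configAt(i);
  int tally = 2 * ballot(i, c) + ballot(left, c) + ballot(right, c);
  if (tally > 0) return 1;  // majority for "alive"
  if (tally < 0) return 0;  // majority for "dead"
  return projection[i];     // a tie: the Source decides directly
}

// a single cell's vote on configuration c: +1 for alive, -1 for dead
int ballot(int cell, int c) {
  return propensity[cell][c] >= 0.5 ? 1 : -1;
}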
Now, the cells must learn from the states that actually manifest after the vote. They learn not only from direct experience, but also from experiences communicated by their two neighbors in the plane of manifestation. Notice that learning from direct experience is twice as strong as learning from what is communicated by neighbors:
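A sketch of the learning step, assuming each cell nudges its propensity table toward the manifested states: by twice the learning rate for its own experience (its current configuration and its own next state), and by the plain learning rate for each experience communicated by a neighbor (that neighbor's configuration and next state):

void learn() {
  for (int i = 0; i < worldSize; i++) {
    int left  = (i + worldSize - 1) % worldSize;
    int right = (i + 1) % worldSize;
    nudge(i, configAt(i),     nextWorld[i],     2 * learningRate);  // direct
    nudge(i, configAt(left),  nextWorld[left],  learningRate);      // communicated
    nudge(i, configAt(right), nextWorld[right], learningRate);      // communicated
  }
}

// move a cell's propensity for configuration c toward the outcome (0 or 1)
void nudge(int cell, int c, int outcome, float rate) {
  propensity[cell][c] += rate * (outcome - propensity[cell][c]);
}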
Now the Source is updated to its next generation:
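As noted above, the code emulating the Source is repeated from the first program; for completeness, here is the same update function again, with patternAt(), liveCount(), initializeCells(), and the rule table likewise carried over unchanged:

void updateCells() {
  for (int x = 0; x < gridSize; x++) {
    for (int y = 0; y < gridSize; y++) {
      nextCells[x][y] = rule[patternAt(x, y)];
    }
  }
  int[][] temp = cells;
  cells = nextCells;
  nextCells = temp;
}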