Artificial creativity
An algorithmic approach to art
by John Lansdown, Centre for Electronic Arts, Middlesex University

Introduction

Artists have long sought to improve their creativity by use of external mechanical or artificial means. We know of systems for regulating architectural proportion going back to the Ancient Greeks (Scholfield 1958); of the mediaeval use of bent nails thrown on the floor to suggest the rise and fall of melodic lines; of the dice music current at the time of Haydn and Mozart (O'Beirne 1967); of the Victorian harmonographs and kaleidoscopes used to generate new visual patterns. From the 1930s through to the 1960s, Joseph Schillinger (1945, 1966), whose pupils included composers as diverse as Villa-Lobos, Benny Goodman and Gershwin, showed how algorithmic and mathematical means could be used to develop musical and visual structures. In the 1950s, the architect Le Corbusier, developing the long-established tradition among designers of using regulatory systems based on the Golden Section, published his Modulor method to assist in building design (Le Corbusier 1954). It is clear, too, that abstract painters such as Malevich and Juan Gris used well-developed regulatory systems in their work (see Konig 1992 for a particularly thorough analysis of Gris' method applied to a particular painting).

These - and many other - methods were devised to assist in the difficult business of generating and improving compositional ideas and not simply to help in the realisation of ideas already formed. There are, however, some fundamental differences between the methods. On the one hand, methods like nail-throwing and dice music relied strongly on the element of chance. In the case of dice music, though, the aural effects of randomness were minimised by the nature of the musical phrases that were chosen by dice throwing. The phrases were composed so that they were related to one another in style, and devised to ensure that any one would fit in pleasant melodic sequence with any other. On the other hand, the algorithmic methods of Schillinger and others - CPE Bach, for instance (Lester 1992) - tended to be rule-based with minimum reliance on randomness. Systems of architectural proportion and other formal systems (such as the Orders) might be thought of as restrictions on rather than aids to creativity. This would be a mistaken view. Systems of proportion provided an accepted framework within which architects could develop their ideas by helping to ensure that buildings could have, as Alberti put it, ' . . . a unity of all the parts founded on a precise law and in such a way that nothing can be added, diminished, or altered but for the worse' (Borsi 1977 p240). We should also remember Orson Welles' insight: 'The enemy of art is the absence of limitations'.

With this background of long tradition in the use of mechanical and artificial means of assistance, it is not surprising that, since computers first came on the scene forty-odd years ago, artists and others have tried to use them to augment their personal creativity. Alan Turing, who knew as much as anyone about the limitations of computational method, predicted - speaking before computers as we now know them existed - that one day they would autonomously do creative things, like writing poetry. But as it happens, over the last thirty or so years the main artistic uses of computing have not been directed towards the compositional process - and certainly not towards the process in which the computer acts as an autonomous composer. This is, I think, a pity because it misses the most important attribute of the computer - something that makes it unique as an artefact. That is, its ability to make decisions according to rules.

The nature of creativity

For many years it is in this area - the autonomous production of artworks by algorithmic means - that I have found the most challenging and rewarding of computer uses. My interest in modelling or simulating creativity has arisen out of a concern to explore this special feature of computing rather than out of any dissatisfaction with 'conventional' creative means. It is, for example, hard to see how computing might have helped, say, Beethoven or Mozart or, in a different context, Benjamin Franklin, to be more creative than they actually were. (Franklin, you will remember, invented, among many other things, bifocal spectacles, crop insurance, daylight saving, the rocking chair, the glass harmonica and the Pennsylvania heating system. He was the first to show the relationship between lightning and electricity - devising the lightning conductor in the process. He helped draft the American Declaration of Independence, apparently using his gift for finding just the right words to suggest some of its most evocative phrases. He was an author, journalist, printer, publisher, diplomat and scientist whose work had surprisingly far-reaching influence - see Porter 1994 pp250-251 for a short assessment of his scientific impact. He was the epitome of the creative polymath.)

Incidentally, some of the complexities of understanding (and, hence, modelling) creativity are pointed up when we compare Beethoven with Mozart. By common consent, both would be classed as extremely creative individuals. Yet the manifestations of their creativity were of very different sorts. Informally we can say that Beethoven's creativity extended as much to inventing new musical forms as to making wonderful examples of them. Mozart, on the other hand, tended to work almost exclusively within pre-existing formalisms. There is also some interesting recent evidence that, at least towards the end of his short life, he used a number-based approach to some aspects of his work - for example, structure and choice of key (Grattan-Guinness 1992). It is a moot point whether his numerological interest extended to using numbers in compositional method.

Modelling creativity

Human creativity, then, is a complex and poorly understood phenomenon and I do not see much hope of devising a general theoretical model to describe it. Even more remote is the possibility of a generalised computable model - that is, one that can be used to simulate creativity over a wide range of activities.

A general theoretical model would need to explain the sorts of differences in creativity outlined above - why Beethoven's creativity differed from Mozart's and why Franklin was so prolific and wide-ranging in his creativity. It would need to throw light, too, on the differences in speed of creative activity - why Rossini could take just 14 days to write the Barber of Seville and Irving Berlin just a weekend to write three of the best songs in Annie Get Your Gun, in contrast to the many years it took, say, Brahms to write his 1st Symphony. It would need to explain the genius loci: the way in which, at certain times and not others, certain places become hotbeds of creative activity - Vienna from the mid 1700s to the end of the 1800s for music; Cambridge in the first half of the 1900s for physics; New York from the 1950s to the 1970s for art; and so on.

Possibly, too, it would need to explain the role of the personality defects so often identified in creative individuals. I do not mean by this madness or similar neurotic problems; I mainly refer to the oddness of personality that seems to accompany high degrees of creativity. Anthony Storr (1972, Chapter 16) effectively disposes of the myth that madness and creativity go hand in hand. In the course of his arguments, he cites a 1904 Study of British Genius by Havelock Ellis, who found that only about 4% of the just over one thousand people he traced in the Dictionary of National Biography could be said to be demonstrably insane - and this included those who, late in life, had developed senile disorders. Today, it is said that roughly the same percentage of the general public consulting GPs in Britain are suffering from psychological problems. I do not know whether Havelock Ellis included Newton in his study of British genius. On several occasions in his life from the age of about 35 onwards, Newton displayed the symptoms of madness. A number of explanations have been proposed for these problems, the most frequent being related to Newton's genius. Klawans (1990, Chapter 3), however, intriguingly and convincingly argues that the most likely explanation for Newton's symptoms was mercury poisoning brought on by his experiments in alchemy!

Additionally, a modern theoretical model would need to explain the role of group creativity rather than just that of the individual. More patents, for example, are now granted to teams than to individuals. It is no accident that large corporations pay great attention to the role of team creativity in their activities. They know that team-working tends to produce creative results to order more consistently than solo-working does. Nowadays, of course, architecture, film, theatre and television are group activities and, although we sometimes give an individual auteur the credit for a work, we are aware that it is essentially a product of many people of varying degrees of individual creativity working effectively together.

A computable model

While a generalised computable model is probably beyond our scope at the present time (and perhaps always will be), it is possible to propose useful if limited models. It is also possible to say something about the general nature of the models that are likely to be useful. I have used one particular model since the late 1960s for applications as diverse as architectural design, visual art, music, dance and drama. Essentially this takes a language-based view of art and assumes that our aim is to generate 'texts' of some sort.

It can be summarised by the formula:

Artwork = (V, G, S, I)

where V is vocabulary, G is grammar, S is selector and I is interpretation.

Depending on the sort of artwork to be produced, the vocabulary will comprise letters, words, phrases, drawn or three-dimensional objects, movements, body positions or parts of buildings. These elements represent a vocabulary rather in the way that words are the vocabulary of natural language. In order to determine the way in which the elements are to be interrelated to produce a 'text', a grammar is defined. This permits some arrangements of items of vocabulary and disallows others. The task of choosing which items of the vocabulary might potentially be used at any instant is in the hands of the selector. In everyday speech we select words from our vocabulary according to the meaning or message we wish to impart. Both meaning and the grammar of the natural language we use limit the choice of words open to us at any one moment, and the grammar helps to minimise potential misunderstandings by encoding the information in conventional forms. But, in making an artwork, we are not trying to convey a message with the same sort of meaning as might be contained in a telegram. Ambiguity and multiple meaning are often actually useful in creative areas of endeavour and are frequently sought after. So, in this model, the process of selection can depend on things other than 'meaning' although, of course, a meaning flows out of whatever is done (as does 'emotion' or 'feeling').

There is another parameter to the model - the interpretation. (In previous descriptions of the model, for example Lansdown 1970, I used the word 'presentation' rather than 'interpretation' for this.) The interpretation takes into account the way in which the final results of the work are to be manifested. A program to produce graphic images has a quite different 'interpretation' to one outputting words or sounds, and the difference cannot always be incorporated in the grammar. For instance, I could devise a program to compose music using the twelve-note vocabulary and eighteenth-century grammar of conventional music. The results might be output as a score for musicians to play, directly as musical sounds using the computer's own music chip, or via a MIDI interface as control to a synthesiser. The vocabulary, grammar and selector could be the same in each case. Only the interpretation differs.
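
By way of illustration only, the four components might be realised in a short program sketch such as the one below (written here in Python; the vocabulary, grammar rule, selector and interpretations are invented for the purpose and are not those of any program described in this paper). The same V, G and S feed two different interpretations, one producing a printable 'score' and one producing MIDI note numbers:

import random

# A sketch of the VGSI model: Artwork = (V, G, S, I). Everything below
# is invented purely for illustration.

V = ["C", "D", "E", "F", "G", "A", "B"]          # vocabulary: note names

def G(previous, candidate):
    # Grammar: allow only steps of a third or less from the previous note.
    if previous is None:
        return True
    return abs(V.index(candidate) - V.index(previous)) <= 2

def S(previous):
    # Selector: choose at random among the grammatically permitted notes.
    allowed = [v for v in V if G(previous, v)]
    return random.choice(allowed)

def I_text(sequence):
    # One interpretation: the sequence as a printable 'score'.
    return " ".join(sequence)

def I_midi(sequence):
    # Another interpretation: the same sequence as MIDI note numbers.
    pitch = {"C": 60, "D": 62, "E": 64, "F": 65, "G": 67, "A": 69, "B": 71}
    return [pitch[v] for v in sequence]

def artwork(length, interpret):
    sequence, previous = [], None
    for _ in range(length):
        previous = S(previous)
        sequence.append(previous)
    return interpret(sequence)

print(artwork(8, I_text))    # e.g. E D C D E F E G
print(artwork(8, I_midi))    # e.g. [64, 62, 60, ...]

Changing only the interpretation, from note names to MIDI numbers, leaves the compositional machinery untouched, which is the point of keeping I separate from V, G and S.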

The primary task in using the model is to devise vocabularies, grammars and selectors that have sufficient richness and variety to produce 'interesting results' - a process which is creative in itself. What 'interesting' means in this context, I confess, must be left deliberately vague although the work of Birkhoff (1933), Stiny and Gips (1978), and Holynski et al (1984, 1986) has some relevance when we want the computer itself to judge what might be interesting. Certainly, creativity cannot be evaluated simply on the novelty of its product: we can use computers to produce endlessly novel output - achieved by the simple expedient of randomly generating pictures or text or whatever. It is unlikely, however, that the results would be very 'interesting'. We judge creativity both by its novelty and its quality. Worth is very much harder to achieve than novelty - whether we use a computer or not.

The VGSI model, I have found, has a great deal of mileage in it. This is because each of its four components can usually be considered as an independent entity. Thus a poetry generation program can, simply by changing its vocabulary, become a graphic generator. Alternatively, by using the same vocabulary but changing the interpretation, an image can become a piece of music or a dance script. Sometimes, of course, V, G and S are difficult to disentangle. For example, in dice music, the distinctions between grammar and vocabulary become blurred. Because the musical phrases are designed to fit together harmoniously with one another in whatever order they are played, we can take the view that some of the grammar is embodied in the vocabulary. Distinctions are hard to make, too, when chaos theory algorithms are used (Lansdown 1991, 1993). In these, the grammar and the selector are intimately bound together in a special way that makes each step dependent on all of its predecessors. Thus, effectively, each new step in a chaos theory algorithm has a 'memory' of that which went before.

The nature of useful computable models of creativity

The idea of the machine having a memory or knowledge of its previous actions seems to me to be fundamental to the autonomous creation of interesting artworks. Chaos theory algorithms achieve this ability by their recursive nature. That is, they are of the form:

Action (time t) = Action (time t-1) * modifier
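
In program terms, a recursive selector of this sort might be sketched as follows (the logistic map is used here simply as a convenient example of a chaos theory algorithm; the vocabulary and the values of r and x0 are illustrative, not drawn from the programs described in this paper):

# A sketch of a recursive, chaos-based selector: each value depends on the
# one before it, so every step carries a 'memory' of its predecessors.

def chaotic_selector(vocabulary, length, r=3.9, x0=0.5):
    choices, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)                  # Action(t) depends on Action(t-1)
        choices.append(vocabulary[int(x * len(vocabulary))])
    return choices

print(chaotic_selector(list("CDEFGAB"), 12))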

Crudely, this is what happens when humans create artworks too, and it is clear from my own studies and those of others (in particular Scarrott 1981) that recursive algorithms are likely to prove the most promising whenever a language model is used. Zipf (1949), in a series of investigations that perhaps pushed his concepts further than they could legitimately go, nonetheless convincingly showed that the use of words in any language follows a regular pattern - whether in newspaper articles, plays or novels. (Similarly in music of all forms.) The pattern is summarised as:

the frequency of any word is approximately inversely proportional to its rank, or rank x frequency = a constant.

This means that if we list the words in a large body of text in order of frequency of use, the most often used word occurs roughly ten times more frequently than the word ranked tenth. (Alternatively, if we plot the logarithm of frequency against the logarithm of rank we get a straight line.) In a study of Henry Fielding novels by Burrows (1989), for instance, the word 'he' is ranked eleventh and occurs 1194 times (11 x 1194 = 13134), and the word 'when' is ranked forty-fifth and occurs 290 times (45 x 290 = 13050). The relatively low total number of words in Burrows' study accounts for a rather wide range for the constant - which varies from about 4000 to 14000 overall, although for most words it is in the range 12500-14000.
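
The rank-frequency check is easy to reproduce in a few lines of code. The sketch below (in Python, illustrative only; the sample sentence is of course far too small, and a text of novel length is needed before the pattern shows clearly) ranks the words of a text by frequency and prints the product rank x frequency that Zipf's law predicts will be roughly constant:

from collections import Counter

# A sketch of the rank-frequency check described above.

def zipf_table(text):
    counts = Counter(text.lower().split())
    ranked = counts.most_common()                      # sorted by frequency
    return [(rank, word, freq, rank * freq)
            for rank, (word, freq) in enumerate(ranked, start=1)]

sample = "the cat sat on the mat and the dog sat on the rug"
for rank, word, freq, product in zipf_table(sample):
    print(rank, word, freq, product)                   # product roughly constant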

Naranan and Balasubrahmayan (1992) derive a new version of Zipf's law that is more accurate for small samples, where the original version of the law breaks down. But it is now clear from a great number of studies that Zipf was correct in his view that meaningful texts follow predictable patterns. This knowledge can help us to generate meaningful 'texts' of all sorts. The recent work by Li (1992) showing that some random, meaningless texts can also follow Zipf's law does not devalue the main argument here. Where Zipf seems to have been mistaken is in his explanation for the existence of the patterns - which he put down to the principle of minimum effort. Thus, for example, he believed that short words occur much more frequently than long words in a language because we wish to minimise effort. It is much more likely that the regularity arises because of the recursive nature of grammatical language. If this is the case - and I am sure that it is - systems for generating meaningful 'texts' need to be recursive. We can achieve this end perhaps by using recursive algorithms for the selector but certainly by using recursive forms of grammar.
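
A recursive form of grammar is straightforward to build. In the toy sketch below (the grammar and vocabulary are invented purely for illustration), the rule for S can invoke itself, so 'texts' of varying length grow out of a small set of rules, each expansion carrying the structure of the expansions within it:

import random

# A sketch of a recursive grammar used as a generator.

GRAMMAR = {
    "S":   [["NP", "VP"], ["NP", "VP", "and", "S"]],   # S may recurse
    "NP":  [["the", "N"], ["the", "ADJ", "N"]],
    "VP":  [["V"], ["V", "NP"]],
    "N":   [["line"], ["curve"], ["melody"]],
    "ADJ": [["slow"], ["broken"]],
    "V":   [["rises"], ["falls"], ["repeats"]],
}

def expand(symbol):
    if symbol not in GRAMMAR:                          # terminal word
        return [symbol]
    production = random.choice(GRAMMAR[symbol])        # the selector at work
    return [word for part in production for word in expand(part)]

print(" ".join(expand("S")))   # e.g. "the slow curve falls and the melody rises"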

Significantly, if we are to insist on recursion as a mechanism, we automatically rule out Markovian (random and probabilistic) processes. This is interesting in that much of the early work on autonomous computer creativity used Markov processes of some order. Hiller and Isaacson (1959), for instance, used them extensively in what is arguably the first serious autonomous use of computing in music. Randomness has been a significant element in the generation of computer artwork ever since. The evidence of Zipf suggests that its days are over.

References

Birkhoff GD (1933) Aesthetic Measure, Harvard University Press, Cambridge, Mass

Borsi F (1977) Leon Battista Alberti: The Complete Works, Electa/Rizzoli, New York

Burrows JF (1989) ''An Ocean Where Each Kind . . .': Statistical analysis and some major determinants of literary style', Computers and the Humanities (23) 4-5 pp309-321

Grattan-Guinness I (1992) 'Why did Mozart write three symphonies in the summer of 1788?', Music Review (53) 1 pp1-6

Hiller L and Isaacson L (1959) Experimental Music, McGraw-Hill, New York

Holynski M and Lewis E (1984) 'Effectiveness standards for computer graphics', In Proceedings IEEE Small Computers in the Arts, IEEE Computer Society Press, Silver Spring, pp23-28

Holynski M, Garneau R and Lewis E (1986) 'An adaptive graphics interface for effective visual representation', In Requicha AAG (ed) Eurographics 86, Elsevier Science, Amsterdam, pp195-206

Klawans HL (1990) Newton's Madness, Headline, London

Konig HG (1992) 'The planar architecture of Juan Gris', Languages of Design (1) 1 pp51-74

Lansdown J (1970) 'Computer art for theatrical performance', In Proceedings ACM International Computing Symposium, ACM, Bonn, pp718-735

Lansdown J (1989) 'Generative techniques in graphical computer art: Some possibilities and practices', In Lansdown J and Earnshaw RA (eds) Computers in Art, Design and Animation, Springer, Berlin, pp56-79

Lansdown J (1991) 'Chaos, design and creativity', In Crilly AJ, Earnshaw RA and Jones H (eds) Fractals and Chaos, Springer, New York, pp211-224

Lansdown J (1993) 'Chaos, complexity and design applications', In Crilly AJ, Earnshaw RA and Jones H (eds) Applications of Fractals and Chaos, Springer, Berlin, pp207-214

Le Corbusier (1954) The Modulor, Faber and Faber, London

Lester J (1992) Compositional Theory in the 18th Century, Harvard, Cambridge

Li WT (1992) 'Random texts exhibit Zipf Law-like word frequency distributions', IEEE Transactions on Information Theory (38) 6 pp1842-1845

Naranan S and Balasubrahmayan VK (1992) 'Information-theoretic models in statistical linguistics: 1-2', Current Science (63) 5 pp261-269 and (63) 6 pp297-306

O'Beirne TH (1967) Dice-Composition Music, LP Record SD888/1, Barr and Stroud, Glasgow

Porter R (ed) (1994) Dictionary of Scientific Biography, Helicon, Oxford

Scarrott GG (1981) 'Some consequences of recursion in human affairs', Proceedings of the IEE (129 Pt A) 1 pp66-75

Scholfield PH (1958) The Theory of Proportion in Architecture, CUP, Cambridge

Schillinger J (1945) The Schillinger System of Musical Composition, (2 Vols), Carl Fischer, Boston

Schillinger J (1966) The Mathematical Basis of the Arts, Philosophical Library, New York

Stiny G and Gips J (1978) Algorithmic Aesthetics, University of California Press, Berkeley

Storr A (1972) The Dynamics of Creation, Penguin, Harmondsworth

Zipf GK (1949) Human Behaviour and the Principle of Least Effort, Addison-Wesley, Cambridge, Mass


A version of this paper was given at the Digital Creativity Conference, Brighton, April 1995

John Lansdown's website: www.cea.mdx.ac.uk/cea/external/staff96/john/john1.html

