ShapeWorld
[Some sample frames in the original 1280 x 720 resolution]
The purpose of the ShapeWorld module is to allow an artificial mind to express itself. In this case the new construct represents an updated version of the OtoomCM program and its derivatives (see under AI Programs above). See OCTAM for screenshots, the manual, and comments on its functional aspects.
What 'language' should an artificial mind have? One could opt for the movie version where androids talk like humans (not that there couldn't be a time in the future when material androids - rather than the animated versions in videos - do indeed speak the way we do). However, let's focus first on a truly artificial mind: a mind that possesses cognitive dynamics yet exists in its own right. One might call this mode emergent AI, in contrast to generative AI, which works with already existing content generated by humans.
There are obvious differences between the organisms on this planet. On the one hand they are all based on the common laws of physics and chemistry and the rules underpinning nonlinear systems in general. On the other, their particular physical configuration together with their brains' overall capacity makes for a distinctive language, a form of expression that over the evolutionary timeline has settled into a symbiosis between cognitive capacity and the body's framework. Hence insects can be differentiated by their intrinsic sounds; the same applies to dogs and cats; and even among humans there are, broadly speaking, differences between males and females, young and old, big and small. The same can be said for intelligent life forms outside our own world.
How organisms, including humans, express themselves is a function of the above symbiosis; what they express is the result of their cognitive processes.
Given the complexity relationship between the two, it would be useless to impose a human-type language upon a system that lacks the complexity required to process such information and then expect the result to manifest itself as a linguistic framework demanding a degree of complexity the system simply does not have. The current attempts at human-like sentences coming from an AI-based algorithm are therefore based on code sequences designed by programmers; they are a top-down implementation (which is not to diminish the programmers' skill). A truly artificial mind, however, creates output that comes from within, a bottom-up approach (hence the label 'emergent AI') - just as any organism has its own language because that is what has emerged over the millennia. And so to ShapeWorld.
Since our AI program features cognitive dynamics yet answers to formal algorithms in the form of computer code, it makes sense (at least in my opinion) to use a language that is similarly constructed. There is the formality of the code, but there also needs to be a flexibility which is responsive to the virtually infinite variety the AI engine is capable of at its own level of complexity.
ShapeWorld makes use of the eight basic shapes found in nature, indeed in all our architecture: plane, block, pyramid, cone, sphere, cylinder, spiral, torus. Any shape, however complicated in the end, can be deconstructed into those basics. (Strictly speaking this is not entirely true: a torus can be deconstructed into a curved cylinder, for example; but in general each shape has its own specific algorithm, and to jump from one to the other one algorithm has to be exited and another entered.)
ShapeWorld uses one single algorithm that includes sections for every one of the shape types; which shape appears, or which combination of shapes, depends on the 27 parameters making up those sections. Therefore any shape, however distinctly expressed, is a function of the combined influence of those 27 parameters. The parameters are continually defined by the AI engine's output. Once a value has been selected, there is an incremental movement from its previous value to the current one, at which point another value is randomly selected, and so on.
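As an illustration only, here is a minimal sketch of how such a parameter set might be handled in code: a single structure holds the 27 values, each new target supplied by the engine is approached in small increments, and the blend of values decides which shape, or mix of shapes, is expressed. The names (ShapeType, ShapeParams, the step rate) are hypothetical and not taken from the actual ShapeWorld source.

```cpp
#include <array>
#include <cstddef>

// Hypothetical illustration: the eight basic shape types used by ShapeWorld.
enum class ShapeType { Plane, Block, Pyramid, Cone, Sphere, Cylinder, Spiral, Torus };

// The 27 parameters whose combined values decide which shape, or mix of shapes, appears.
struct ShapeParams {
    std::array<double, 27> current{};   // values currently driving the render
    std::array<double, 27> target{};    // latest values derived from the AI engine's output

    // Move each parameter a small step from its previous value towards the target;
    // once a target has been reached a new one is supplied and the process repeats.
    void step(double rate = 0.05) {
        for (std::size_t i = 0; i < current.size(); ++i) {
            current[i] += (target[i] - current[i]) * rate;
        }
    }
};
```

The point of the sketch is simply that one routine covers every shape type and that the parameters glide rather than jump, in line with the incremental movement described above.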
The shapes can be compared to an organism's body language; they are the visuals. Then there is the sound. What does an artificial mind sound like? Or, in this case, what does a sphere, a cylinder or a cone for that matter sound like? Since the entire system is a construct to begin with, derived from the considerations mentioned above, the sound should follow similar lines.
Each shape has its own particular sound, its own wave spectrum. The algorithm is loosely based on the specific form (ie, round, elongated, angular) but apart from that uses equations that produce the sound coming through the speakers. In other words, there is no recording that has been made from some sound-producing 'thing' and subsequently modified. Certain expressions within the equations contain parameters which are defined by the values defining the shapes on an ongoing basis, so a change in a shape is reflected in its sound and a combination of shapes is reflected in a combination of sounds (a mix). Further considerations: Of frogs and things.
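Purely as a sketch of the idea (the actual ShapeWorld equations are not reproduced here), a shape's sound could be generated along these lines: samples are computed directly from a few terms whose coefficients are fed by shape-derived values, so no recorded material is involved. The parameter names roundness, elongation and angularity are placeholders for whatever values the real equations use.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch only: audio samples generated directly from equations,
// with no recording involved. The three inputs stand in for shape-derived values.
std::vector<float> synthesiseShapeSound(double roundness, double elongation,
                                        double angularity,
                                        double seconds = 1.0,
                                        double sampleRate = 44100.0) {
    const double pi = 3.14159265358979323846;
    std::vector<float> buffer(static_cast<std::size_t>(seconds * sampleRate));
    const double base = 110.0 + 220.0 * elongation;   // fundamental pitch follows elongation
    for (std::size_t n = 0; n < buffer.size(); ++n) {
        const double t = n / sampleRate;
        const double smooth = roundness  * std::sin(2.0 * pi * base * t);        // rounder: purer tone
        const double edgy   = angularity * std::sin(2.0 * pi * 3.0 * base * t);  // angular: added harmonic
        buffer[n] = static_cast<float>(0.5 * (smooth + edgy));
    }
    return buffer;
}
```

Summing the buffers of several shapes would then correspond to the mix of sounds mentioned above.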
Producing all the shape types through a single algorithm required a re-think of just about all geometric aspects. Colouring and texturing made use of shaders applying various methods, so that features such as shadows and reflections could be applied in one render pass while still maintaining a reasonable frame rate.
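As a rough indication only of what a single render pass can look like (the shader methods actually used are not documented here, so the calls below are an OpenGL-style assumption and every identifier is a placeholder): the shadow and reflection inputs are bound as textures and one shader program combines them while the geometry is drawn once.

```cpp
#include <GL/gl.h>   // OpenGL; in practice an extension loader would provide these entry points

// Hypothetical single-pass draw; identifiers do not come from the ShapeWorld source.
void drawShapeSinglePass(GLuint shapeShader, GLuint shadowMapTex,
                         GLuint envMapTex, GLsizei indexCount) {
    glUseProgram(shapeShader);                      // one program handles colour, shadow and reflection
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, shadowMapTex);     // pre-rendered shadow map
    glUniform1i(glGetUniformLocation(shapeShader, "shadowMap"), 0);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_CUBE_MAP, envMapTex);  // environment map for reflections
    glUniform1i(glGetUniformLocation(shapeShader, "envMap"), 1);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);  // geometry drawn once
}
```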
© Martin Wurzinger - see Terms of Use