
Further developments

Where to go from here?

The functional perspective of Otoom makes the model applicable to a wide range of scenarios - whether the objects happen to be biological entities such as neurons or entire organisms, mental manifestations such as thought structures, or collective articulations on a societal level.

Each area offers opportunities for faculties and institutes, whether their staff are researchers or students. Questions can be addressed which previously could not even have been asked, let alone answered.

Computer program:
Simulating the dynamics of attractors and affinity relationships on a computer allows the various parameters to be changed at will just to see what happens. Clusters and domains can be isolated and subjected to different Menus, and indeed different input, in order to compare them with particular sections of the biological brain. Of course, to approach real-life conditions the scale of the setup would have to be increased considerably, up to hundreds of millions of nodes. Moving the code from its current linear (serial) mode to a distributive one then becomes mandatory (its modularity ensures this can be done with a minimum of fuss).
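To indicate what such a parameter-driven update involves, here is a minimal C++ sketch; the names (Node, affinity, cycle, pullFactor) and the toy update rule are illustrative assumptions only and are not taken from the OtoomCM source.

// Minimal sketch only - Node, affinity, cycle and pullFactor are
// illustrative names, not the actual OtoomCM internals.
#include <vector>
#include <cstdlib>
#include <cmath>

struct Node {
    std::vector<int> state;       // integer content of the node
};

// A toy 'affinity' measure between two nodes: the inverse of the
// summed absolute difference of their states.
double affinity(const Node& a, const Node& b) {
    double diff = 0.0;
    for (size_t i = 0; i < a.state.size() && i < b.state.size(); ++i)
        diff += std::abs(a.state[i] - b.state[i]);
    return 1.0 / (1.0 + diff);
}

// One update cycle over the whole matrix: each node is pulled towards
// its most affine neighbour, so clusters (attractor basins) can form.
void cycle(std::vector<Node>& matrix, double pullFactor) {
    for (size_t i = 0; i < matrix.size(); ++i) {
        size_t best = i;
        double bestAff = 0.0;
        for (size_t j = 0; j < matrix.size(); ++j) {
            if (j == i) continue;
            double a = affinity(matrix[i], matrix[j]);
            if (a > bestAff) { bestAff = a; best = j; }
        }
        for (size_t k = 0; k < matrix[i].state.size(); ++k) {
            int delta = matrix[best].state[k] - matrix[i].state[k];
            matrix[i].state[k] += static_cast<int>(pullFactor * delta);
        }
    }
}

Varying pullFactor, the length of the state vectors or the affinity measure itself corresponds to the kind of parameter experiments described above.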

Development has started on transposing the original OtoomCM program into a parallel format using CUDA. Derived from the original C++ code, the matrix operations are being distributed across the GPU.
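As an indication of the pattern involved, the following CUDA sketch distributes a per-node update across GPU threads; the kernel, its toy update rule and the launch configuration are illustrative assumptions, not the actual OtoomCM kernels.

// Illustrative CUDA sketch only - it shows the general pattern of
// spreading a per-node matrix update across GPU threads.
#include <cuda_runtime.h>

__global__ void updateNodes(int* states, const int* inputs,
                            int nodeCount, int stateLen)
{
    int node = blockIdx.x * blockDim.x + threadIdx.x;
    if (node >= nodeCount) return;

    // Each thread handles one node's state vector.
    for (int k = 0; k < stateLen; ++k) {
        int idx = node * stateLen + k;
        // Toy rule: nudge the state towards the external input.
        states[idx] += (inputs[idx] - states[idx]) / 2;
    }
}

// Host side: one thread per node, 256 threads per block.
void runCycle(int* dStates, const int* dInputs, int nodeCount, int stateLen)
{
    int threads = 256;
    int blocks  = (nodeCount + threads - 1) / threads;
    updateNodes<<<blocks, threads>>>(dStates, dInputs, nodeCount, stateLen);
    cudaDeviceSynchronize();
}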

In- and outputs can be chosen for whatever purpose, whether they be of the visual, aural or any other sensory type, plus mechanical devices. As long as the effective input to the matrix can be translated into integers, and as long as the integer output from the matrix can be translated into the appropriate data type for the output device, there are no limits to the type of interface. This means the way is now open for a truly humanoid robot.
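A minimal sketch of that interface idea, assuming hypothetical InputDevice and OutputDevice classes: any sensor that can deliver its readings as integers can feed the matrix, and any actuator that accepts integers can be driven by it.

// Sketch of the interface idea only; the class names are hypothetical.
#include <vector>
#include <cstdint>

// Anything that can deliver its readings as integers can feed the matrix.
struct InputDevice {
    virtual std::vector<int32_t> read() = 0;      // e.g. pixels, audio samples
    virtual ~InputDevice() = default;
};

// Anything that can act on integers can be driven by the matrix.
struct OutputDevice {
    virtual void write(const std::vector<int32_t>& data) = 0;  // e.g. motor commands
    virtual ~OutputDevice() = default;
};

// One pass: read from a sensor, let the matrix process, drive an actuator.
// processMatrix stands in for the (unspecified) matrix update.
void step(InputDevice& in, OutputDevice& out,
          std::vector<int32_t> (*processMatrix)(const std::vector<int32_t>&))
{
    out.write(processMatrix(in.read()));
}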

Once in distributive mode it doesn't really matter whether domains are linked within a matrix or across the internet or any combination thereof. An artificial mind, distributed across servers and home computers, linked in real-time to whatever in- and outputs are made available - that's the future!

One major problem with simulations of any kind concerns what could be called the functional autonomy within the system, or rather the lack of it. Although in the Otoom versions the various states emerge from within (in contrast to the top-down approach where the state-altering algorithms have been supplied from the outside, i.e. by the programmer), the model still suffers from an all-encompassing architecture which defines the system's character per se; the same applies to the sub- and sub-sub-etc systems. As an analogy consider this question: how does a drop of water know it is a drop of water? In other words, why does a bunch of molecules of a certain type behave according to their innate nature? The answer is functional autonomy, the self-contained characteristics of those molecules which force them to behave in a certain way and no other. Whether there are a few hundred of them or billions, no extra resources are necessary for the manifestation of what is essentially 'water' because each molecule answers to its self-same potential.

To transpose the analogy into the computer model would mean that each node is essentially a computer in itself, programmed in terms of that node's functionality. Current technology requires a node to be part of a program, and the program to be installed on a system designed to run it. To put this another way, not until nanotechnology permits the design of a functional architecture at that level of scale will each node in the model 'know it is a node'.

Artificial mind:
On a more profound level one could investigate the re-representative expectability during the formation of domains, their consistency in terms of the affinity relationships with other domains, and hence the abstract-forming potential given certain parameter combinations at certain scales. Since the state of each node can be formally ascertained at any step within any cycle (as long as you don't mind many megabytes of text files), their respective phases can be formally identified; so far we have been reduced to guessing about such things within the utterly vague - and biased! - conceptual space of the written or spoken word.
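A minimal sketch of such per-step logging, assuming the matrix is held as integer state vectors; the file format shown is an illustrative assumption, not the program's actual output.

// Sketch only: append every node's state for a given cycle and step
// to a plain text file, one node per line.
#include <fstream>
#include <vector>
#include <string>

void logStates(const std::vector<std::vector<int>>& matrix,
               int cycle, int step, const std::string& path)
{
    std::ofstream out(path, std::ios::app);
    out << "cycle " << cycle << " step " << step << '\n';
    for (size_t n = 0; n < matrix.size(); ++n) {
        out << n;
        for (int v : matrix[n]) out << ' ' << v;
        out << '\n';
    }
}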

A corollary to the above would be the question of latency. Just how is the trace of a memory laid down among the node elements so that its re-representation can emerge through an affinitive input? And what does 'affinitive' mean anyway? And going even further, since latency does exist regardless of any future evocation, 'how much' latency can be packed into any given domain? Surely, there must be a probability envelope of possible appropriate future input outside of which the system becomes unresponsive. The next question then is, would something like the progression lock (the limited optionality of further developments due to already instantiated developments - observable in thought structures from the level of individuals to society at large as well as in biology) be applicable within the context of latency?

So far two types of media have been used - the neuronal framework of the brain and the matrix nodes of the computer program. Both can be shown to exhibit re-representative states. The question is, what other types of media exhibit similar qualities? Or, to put it differently, what other types of media are capable of re-representational abstractions of environmental input? If abstractions can be defined as intersections of functional commonalities, latency could be interpreted as a superset of possible optionalities for any given environment. The ramifications are immense.
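Read literally, the definition of an abstraction as the intersection of functional commonalities can be sketched as follows; representing a domain's features as a set of integers is an illustrative assumption only.

// Toy illustration: the 'abstraction' over several domains is the set
// of features common to all of them.
#include <set>
#include <vector>
#include <algorithm>
#include <iterator>

std::set<int> abstraction(const std::vector<std::set<int>>& domains)
{
    if (domains.empty()) return {};
    std::set<int> common = domains.front();
    for (size_t i = 1; i < domains.size(); ++i) {
        std::set<int> next;
        std::set_intersection(common.begin(), common.end(),
                              domains[i].begin(), domains[i].end(),
                              std::inserter(next, next.begin()));
        common = std::move(next);
    }
    return common;
}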

Human behaviour:
Using the model to focus on group behaviour and its dynamics in society, the viability of the many political views and plans can be identified in a far more formal manner than was possible up to now (whether politicians would want to know about it is another matter). Otoom does not necessarily create a pacifist world, since it does not invent some utopia but merely describes what is; yet it does allow the options to be weighed that much more accurately and objectively, because the perspective is a formal identification of the mental picture standing for those options.

Right now there are wars going on whose costs exceed the budgets of many entire nations combined, with no clear idea of what can be achieved and how; there are measures taken world-wide, driven by ideology, that seek to define our moral values yet achieve nothing but destruction decade after decade; and initiatives exist which seek to 'help' but often are either interpreted by the recipients as intrusive or make matters even worse.

Because Otoom's conceptual tool set is unhindered by cultural and ideological filters, considerations about the functional characteristics of demographics, and how close these may be to a demographic's core identity, can now be entertained more productively. The question assumes a high degree of significance whenever the people of a region are subjected to a rule set decided upon somewhere else, regardless of whether the intent had been benign or malicious.

While aggressive behaviour on that scale is hardly conducive to rational arguments as found under Otoom, even positive initiatives need to be handled with care. For example, one of the major issues affecting demographics worldwide is climate change and the measures to be taken. The effectiveness of any plan depends very much on how well the recipients perceive and understand its content.

In essence the question revolves around the reality of existing customs, values, and habits, and how to manage them in order to prevent too much damage on the other side. This relationship works both ways. The same goes for those customs and so on which are valued in their breach.

Since Otoom enables the functional distances between identity and other features to be ascertained, the dangerous waters of a threatened identity would be avoided (at the very least, their proximity can be recognised). So far a major source of conflict has been the inability of political decision makers to understand when their ideas constituted a threat to someone else's identity; in other words, which of their measures represented a perceived danger in relation to which elements of a people's culture. Often the overall intent had been a destructive one anyway and so the measures were actually meant to harm. On the other hand, in many cases the intent had been a more or less constructive one, but the consequences led to destruction nevertheless. The originator is usually the last to understand the true situation (the war in Iraq is a poignant example).

See the Parallels for the types of scenarios that now open themselves up to scrutiny.

Much of the uncertainty would be removed if there existed a comprehensive, functional profile of every demographic on this planet. Such a catalogue would be far more analytical and objective than the attempts under the auspices of one or the other culture and/or religion. Companies, security services, and educational systems already use profiling of individuals. Otoom enables the practice across the various scales.

Life on this planet has achieved a degree of complexity that requires a much more comprehensive knowledge base than was imagined in the past.


© Martin Wurzinger - see Terms of Use