Sunday, 5 December 2010

A new type of analysis

The familiar form of analysis relies on content. That is to say, an event and/or an object is observed, its details are noted, comparisons are employed, and a conclusion based on identified relationships is reached. That type of analysis is essentially content-based.

What if there were another form, one derived from a similar approach but now focusing on functionality - the type of behaviour and/or characteristics rather than the behaviour itself?

In the context of cognitive dynamics the use of functionality would be particularly productive since it is virtually impossible to observe every thought and idea - never mind their details. Let's examine what this means.

Complex, dynamic systems (and the mind is one example par excellence) have gained those adjectives because their subsystems, sub-subsystems and so on are multifaceted and in constant flux in terms of their mutual relationships and their degree of significance to the wider system. For the current purpose we label those subsystems thought structures (TSs), because they represent the phenomena produced by the neuronal activities which, at a higher level of interpretation, we perceive as thoughts. Clusters of such activities form thought patterns, the basis for what we can label concepts - that is, entities derived from a pattern of thoughts. On a lower level of the conceptual scale, concepts derive from functional domains, which are defined by the affinities among their processes.
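
To make the terminology concrete, here is a minimal sketch in Python of the hierarchy just described - TSs grouped into functional domains, domains clustering into thought patterns. All of the class and field names are hypothetical illustrations; nothing in the model prescribes this particular representation.

    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class ThoughtStructure:
        """A TS: a functional unit characterised by what it does rather than what it contains."""
        name: str
        abstraction_level: int                              # position on the conceptual scale
        affinities: Set[str] = field(default_factory=set)   # names of related TSs

    @dataclass
    class FunctionalDomain:
        """A domain defined by the affinities among its members' processes."""
        label: str
        members: List[ThoughtStructure] = field(default_factory=list)

    @dataclass
    class ThoughtPattern:
        """A cluster of TS activity; the basis for what the text calls a concept."""
        domains: List[FunctionalDomain] = field(default_factory=list)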

Hence complexity is a summary descriptor of the extent to which the multifacetedness, its interdependencies, and its cognitive manifestations have been allowed to develop. The term can be applied to the entire system of mind or to any one of its parts, where differences in degree can and do occur. For example, the present text is the result of relatively high-complexity cognitive dynamics, but ask me about cricket and I wouldn't have a clue.

The nature of complexity is such that any one of its manifestations can grow further under general input, because the variance among the contributing TSs ensures that most - if not all - of them can process some of that input. The extent of the development depends on the quality of the input, the mutual relatedness of the TSs, and their inherent latency (see previous post) - all of which are describable in terms of their specific functionality rather than their content.
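
As a hedged illustration of that dependency, the toy function below treats the extent of development as a simple product of input quality, relatedness and (inverted) latency. Both the scoring formula and the [0, 1] scaling are assumptions made purely for the example.

    def development_extent(input_quality: float, relatedness: float, latency: float) -> float:
        """All three factors are assumed to lie in [0, 1]; higher latency dampens growth."""
        return input_quality * relatedness * (1.0 - latency)

    # Example: high-quality input reaching a well-connected but sluggish TS.
    print(development_extent(input_quality=0.9, relatedness=0.8, latency=0.6))  # roughly 0.288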

Let us now concentrate on the relationship between input (in this case the communicated result of somebody else's TSs) and its effect on the side of the recipient and the TSs there.

The source of the input may well consider it to be homogeneous, but to the set of TSs in the recipient with their respective domains the input represents a multitude of sub-contexts. Yet if there is intent behind the source's output (ie, its TSs form an entire pattern) then the recipient's expectations regarding its effect may well be misplaced, although any TSs on the recipient's side would not possess the wider context to recognise this.
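
A small, purely hypothetical illustration: a single input that the source treats as one message is read by each recipient domain only through the fragment it can process, and so resolves into several sub-contexts. The example vocabulary and domain labels are invented for the purpose.

    input_text = "raise taxes to fund schools"
    domain_vocab = {
        "economics": {"raise", "taxes", "fund"},
        "education": {"schools", "fund"},
        "politics":  {"raise", "taxes"},
    }

    # Each domain 'sees' only the fragment of the input it can process.
    sub_contexts = {
        domain: [word for word in input_text.split() if word in vocab]
        for domain, vocab in domain_vocab.items()
    }
    print(sub_contexts)
    # {'economics': ['raise', 'taxes', 'fund'], 'education': ['fund', 'schools'], 'politics': ['raise', 'taxes']}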

Similarly, if there are separate articulations coming from the respective target TSs then these articulations will reflect the expectations or confirmations resulting from their individual processes. This can be scaled up: substitute the TSs with humans and the patterns with groups of people and the degree of complexity rises further.

The same principles apply, but now the probability of variance among the domains has risen. Now there are TSs within the TSs, and domains can and do overlap. Whether the overlaps are recognised as such is another question, leading to the misunderstandings referred to earlier, although once again they are not necessarily apparent to their source.

Such misunderstandings can lead to unwanted transfers, where content representative of a relatively higher abstraction level is directed to a domain at a lower level (for abstractions see previous post). Given the propensity for affinities between abstraction levels in any case, there is a considerable chance the resultant TSs will be incongruent (for example, try explaining the concepts of higher ethics to a young child when it has done something wrong). Since the context in relation to the source is now dispersed across the target domains, re-tracing how the abstraction levels formed for the purpose of clarification becomes that much more difficult.
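
The mismatch can be sketched as a simple congruence check - the content's abstraction level against the typical level of the target domain. The averaging and the tolerance value are illustrative assumptions, not part of the model.

    def transfer_is_congruent(content_level, domain_levels, tolerance=1):
        """Return True if the content's abstraction level sits within `tolerance`
        of the domain's typical level; otherwise the resultant TSs are likely
        to be incongruent in the sense described above."""
        typical = sum(domain_levels) / len(domain_levels)
        return abs(content_level - typical) <= tolerance

    # 'Higher ethics' (level 5) directed at a domain operating at levels 1-2:
    print(transfer_is_congruent(content_level=5, domain_levels=[1, 2, 2]))  # False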

Nevertheless, it is possible to establish the relational structures of the TSs, including their respective abstraction levels and their mutual differences. While they do not enable a source -> target analysis, these structures provide a snapshot of the TSs' configuration at the time. They therefore represent a unique 'fingerprint' of a person's, a group's, or indeed a society's conceptual organisation. Still, it is only a snapshot: any subsequent input (either from the outside or from among the subdomains) will modify the general structure.
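
One possible way to render such a 'fingerprint', assuming the affinity links and abstraction levels have already been extracted, is to freeze them into a canonical, order-independent form; the tuple-based representation below is merely an illustrative choice.

    def fingerprint(structures):
        """`structures` is an iterable of (name, abstraction_level, affinities) tuples.
        Returns a canonical, order-independent snapshot of the configuration."""
        return tuple(sorted(
            (name, level, tuple(sorted(affinities)))
            for name, level, affinities in structures
        ))

    snapshot = fingerprint([("ts_a", 2, {"ts_b"}), ("ts_b", 3, {"ts_a", "ts_c"})])
    print(snapshot)  # (('ts_a', 2, ('ts_b',)), ('ts_b', 3, ('ts_a', 'ts_c')))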

Given the transient nature of the resultant framework, is a comparison between two such frameworks possible?

Since the precise source -> target relationships cannot be identified, it is impossible to trace previously established relationships in order to find the newer ones. On the other hand, although we do not have recourse to a time stamp, we do have the affinity relationships and their abstraction levels, and these can be identified. Because both phenomena are in a state of constant flux, they are subject to time-related dispersal across their domains. That is to say, if we compare the structures of sub-domains and observe their linkages, those that are more widely dispersed (ie, had more time to create the linkages) will most likely be those that occurred before the less dispersed ones. Setting a cut-off point along those lines for both of the sets (comparing set 1 with set 2) gives us a useful normalisation.
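
As a minimal sketch of that normalisation, dispersal is approximated below as the number of distinct sub-domains a structure has linked into, and the cut-off keeps only the more dispersed (and hence presumably older) structures in each set. Both the measure and the threshold are assumptions made for the example.

    def dispersal(links_by_structure):
        """Map each structure to the number of distinct sub-domains it has linked into."""
        return {name: len(set(domains)) for name, domains in links_by_structure.items()}

    def normalise(set1, set2, cutoff):
        """Keep, in each set, only the structures dispersed across at least `cutoff`
        sub-domains, so that the two sets can be compared like for like."""
        keep = lambda links: {n for n, k in dispersal(links).items() if k >= cutoff}
        return keep(set1), keep(set2)

    set1 = {"ts_a": ["d1", "d2", "d3"], "ts_b": ["d1"]}
    set2 = {"ts_x": ["d1", "d4"], "ts_y": ["d2", "d3", "d5"]}
    print(normalise(set1, set2, cutoff=2))  # ts_a survives in set 1; ts_x and ts_y in set 2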

This approach cannot be regarded as failsafe, since in the end we do not know to what extent each TS could have related to any other structure. Still, the results give us a general picture of the cognitive dynamics in existence, and any marginal errors can be tested for by re-setting the cut-off points (while this tells us nothing more about their history, it allows us to disregard the more compact structures in favour of those that did form a more comprehensive network).
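
In terms of the sketch above, that testing would amount to re-running the normalisation with progressively higher cut-off values and checking whether the comparison remains stable once the more compact structures drop out; the cut-off itself is an illustrative device rather than anything prescribed by the model.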

The question arises: can such an analysis be done via a simulation (eg, a version of the OtoomCM), sufficiently scaled up to permit comparable input to be processed? Part of the answer lies in the definition of 'comparable'. No doubt an exact replica of the real set is impossible for obvious reasons. Whether a pared-down model will be informative, and to what degree, can only be ascertained through trials using real-world data. Yet whatever the ultimate outcome, even simpler versions of the real thing should reveal cognitive imprints that tell us something about their origins.

Note: the above text is rather dense. A familiarity with how the mind works would certainly aid in its understanding, but I attempted to convey - however successfully or otherwise - how the concept of functional analysis can be applied to characterise thoughts, concepts, individuals, demographics and societies.
