Space-Time Dynamics of Human Language (4/4)

August 17, 2017

Language has been described as a game between a speaker and a hearer: the speaker makes an assumption about the hearer's mental context and crafts the next utterance to nudge that context closer to the speaker's own. The hearer, likewise, does the same with the speaker's mental context. The good news is that this game terminates rather fast – the hearer grasps the meaning and intent of the speaker in finite time. Almost every linguistic computation necessarily terminates. Whether the intended effect of meaning and understanding is achieved or not, the termination of linguistic processing is guaranteed.
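To make the termination claim concrete, here is a toy sketch of such a game, with mental contexts reduced to feature sets and "influence" reduced to set updates (all of this is invented for illustration, not a model from the literature):

```python
def dialogue(speaker, hearer, max_turns=10):
    """Toy language game: each turn, the speaker utters one feature the
    hearer's context is still missing, and the hearer absorbs it. The
    gap shrinks every turn, so the game necessarily terminates."""
    for turn in range(max_turns):
        gap = speaker - hearer
        if not gap:                      # contexts aligned: understanding
            return turn
        hearer |= {next(iter(gap))}      # convey one missing piece
    return max_turns                     # bounded even without full alignment

speaker_context = {"bird", "sky", "flying", "beauty"}
hearer_context = {"bird"}
print(dialogue(speaker_context, set(hearer_context)))  # terminates in 3 turns
```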

The phrase “space-time dynamics of human language” can be interpreted in several ways:

  • A way to measure the complexity of processing human language as a computational process using a certain computational model.
  • Mechanisms to represent the space and time dimensions in human language.
  • Syntactic and structural constructs of lexical items in order to influence the space-time aspects of the underlying Talent_scene.

As the reader will have guessed, we are looking into the third interpretation – what are the structural linguistic mechanisms, markers and processes that can qualify space and time attributes in a way that supports computing the task of meaning evolution? There are two core principles that we will rely upon to build out the argument that follows:

Axiom 4 (Syntax): Syntactically incorrect statements are necessarily ambiguous or plain nonsense.

Axiom 5 (Sentence): A sentence is the minimal complete unit to transfer meaning from a speaker to a hearer.

The Syntax axiom is self-evident from the language acquisition process of a human child: incorrect statements are corrected until utterances conform to the grammar prevalent for that language.

The Sentence axiom needs some explanation, since sub-parts of a sentence, such as words and syllables, are often seen as units of meaning. Some useful pointers that support this axiom are as follows:

  • Sanskrit grammarians like Nagesha Bhatta have explicitly stated that fragments of a sentence exist only in the grammarian's imagination, while the sentence alone conveys complete meaning.
  • Sentiment extraction, as an automated computational linguistic process, needs at least a sentence to determine the tone and intent of an utterance. An entire discourse helps to determine the sentiment better.
  • The quality of text processing using machine learning improves when longer sentences are parsed.
  • When a speaker utters sentence fragments, a listener fills in the necessary gaps based on context in order to follow the speaker's intent.

Moving from a sentence fragment to a completed sentence, as indicated above, is context dependent. We can go one step further using the vocabulary we have developed so far. For us, the context of communication is a Talent_scene shared between a speaker and a listener. More precisely, the same set of Sense_scenes needs to be evoked in order to arrive at successful communication. For example, I cannot point out the beauty of birds in flight when my teenager is busy looking into his device. A shared Talent_scene has to co-evolve even as linguistic communication points out the nuances of consensus that need to occur.

In the literature, a shared Talent_scene is referred to as shared attention. Shared attention in the classical sense could lead to an infinite regress as the mental states of speaker and listener keep approximating each other – much like Zeno's paradox. And just as Zeno's paradox is resolved by resorting to terminating sequences that converge to a limit, shared attention has to necessarily converge. The Talent_scene offers a simple two-step convergence by reaching a consensus between the Sense_scenes of a speaker and a listener. The first step is to ensure that the same set of Sense_scenes is evoked. The second step is to arrive at a consensus between the speaker's Sense_scene and the listener's Sense_scene. While the feature sets and configurations of speaker and listener can differ, the consensus requirement here means coexistence without contradictions. Thus, we don't seek a perfect equivalence but rather a reasonable mapping that is consistent with the speaker's and listener's experience and attention.
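As a toy illustration of this two-step convergence (the SenseScene structure and the contradiction test below are hypothetical stand-ins for constructs this series has only described informally):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SenseScene:
    """Hypothetical stand-in: one sense's feature set and configuration."""
    sense: str              # e.g. "visual", "auditory"
    features: frozenset     # e.g. frozenset({"bird", "sky"})

def contradicts(a: SenseScene, b: SenseScene) -> bool:
    """Toy contradiction test: a feature asserted by one party and
    explicitly negated ("not:<feature>") by the other."""
    return any("not:" + f in b.features for f in a.features) or \
           any("not:" + f in a.features for f in b.features)

def converge(speaker: dict, hearer: dict) -> bool:
    """Two-step convergence sketch.
    Step 1: the same set of Sense_scenes (sense types) must be evoked.
    Step 2: per sense, the scenes must coexist without contradictions;
    feature-for-feature equivalence is NOT required."""
    if set(speaker) != set(hearer):                       # step 1
        return False
    return all(not contradicts(speaker[s], hearer[s])     # step 2
               for s in speaker)
```

The point of the sketch is only that step 2 checks coexistence, not equality: differing feature sets pass as long as neither side negates the other.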

Without communication of any kind, convergence between the Talent_scenes of distinct individuals is a matter of chance. Predictable convergence is only possible when there is a nudge in the right direction. Since communication exists in almost all species at different levels of sophistication, convergence to a shared Talent_scene is a universal observation across life forms. Here again we remind ourselves that our focus is on human language communication and how human language can accelerate this convergence to a shared Talent_scene across individuals or communities.

What we are looking for in human language is therefore a set of markers that can align Talent_scenes. By now the reader will have guessed that I use space-time as my primary dimensions. The objectivity of space-time is well established in physics. Philosophers have grappled with the human experience of space-time over the years and have accepted the inevitable conclusion that we experience space and time in similar ways. In our vocabulary, Sense_scenes are built from senses which have the same biophysical construction and the same biochemical basis. Given that the sensory apparatus is the same, we can safely assume that, for individuals without impairments, there exists a reasonable morphism between the feature sets and configurations of one individual and another, and vice versa. At the Sense_scene level, there should be no confusion that similar topographic maps are created. So, applying the yardstick of space-time as a convergence accelerator for Talent_scenes is sound.

Let us now turn our attention to two prominent categories of lexical terms – verbs and nouns. While English as a language has been extremely well studied, it is also notoriously difficult to analyze from a structural linguistics point of view. My own simplification of this conundrum (although I am fluently expressing what I think in English in this blog!) is to fall back on Sanskrit as a classical language. If the reader has not noticed earlier(!), I have repeatedly turned to Sanskrit grammatical literature to substantiate crucial points in my argument so far.

Sanskrit (and, I am told, Latin) offers rich morphology at the level of verbs and nouns. Without going into too much detail, let me summarize my key observations on Sanskrit verbs and Sanskrit nouns as follows. To keep the focus on the lexical items, I drop the Sanskrit prefix, since for the rest of this section I refer only to Sanskrit verbs and Sanskrit nouns.

  • Verbs represent actions. In our exploration context, this encodes the time aspect of the scene.
  • Nouns represent entities and their relationships. In our exploration context, this encodes the spatial representation of entities.
  • Verb morphology is a result of Mood/Tense (लकार), Number (वचन) and Person (पुरुष).
  • Noun morphology is a result of Case (विभक्ति), Number (वचन) and Gender (लिङ्ग).

In short, every sentence has clear space and time representations that scope the alignment criteria for the associated Talent_scene.
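A minimal sketch of what such a space-time scope might look like as a data structure (the names and fields are hypothetical; real morphological analysis is far richer):

```python
from dataclasses import dataclass

@dataclass
class VerbMarkers:
    """Time-facing morphology of the verb."""
    mood_tense: str   # lakaara
    number: str       # vacana
    person: str       # purusha

@dataclass
class NounMarkers:
    """Space-facing morphology of a noun."""
    case: str         # vibhakti: the entity's role/relation
    number: str       # vacana
    gender: str       # linga

@dataclass
class SentenceMarkers:
    """The space-time scope a sentence imposes on the shared Talent_scene."""
    nouns: list       # spatial side: entities and their relations
    verb: VerbMarkers # temporal side: the action and its anchoring

# e.g. a one-verb, two-noun sentence:
s = SentenceMarkers(
    nouns=[NounMarkers("nominative", "singular", "masculine"),
           NounMarkers("accusative", "plural", "neuter")],
    verb=VerbMarkers("present", "singular", "third"))
```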

The need for the Sentence axiom becomes clear at this point, since the spatial representation and interaction model of the various entities cannot be deduced successfully from fragments – all entities and their relations in space and time are needed to construct an unambiguous scene. Here I use success not in the sense of the possibility of creating a scene, but rather of creating the scene the speaker intended. On the other hand, note that several consistent scenes can come from one complete sentence. Similarly, several possible sentence-fragment continuations can be predicted, each leading to several consistent scenes. Economy of representation and efficiency of convergence towards meaning demand that as few scenes as possible are realized as part of meaning-task evolution. A further constraint on the number of scenes is offered by the evolution path of scenes – not every scene can be a valid successor. So, with three dimensions of constraints – spatial, temporal and the evolving context of meaning – a speaker is able to successfully convey her intent to a listener.
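A toy rendering of this three-way pruning (scenes as dictionaries and markers as sets are invented structures, purely for illustration):

```python
def viable_scenes(candidates, entities, event, prior):
    """Prune candidate scenes along the three constraint dimensions:
    space (entities placed), time (event matches the verb), and the
    evolution path (scene must be a valid successor of the prior one)."""
    def spatial_ok(s):
        return entities <= s["entities"]   # all mentioned entities present
    def temporal_ok(s):
        return s["event"] == event         # event agrees with the verb
    def evolution_ok(s):
        return prior is None or prior["entities"] <= s["entities"]
    return [s for s in candidates
            if spatial_ok(s) and temporal_ok(s) and evolution_ok(s)]

candidates = [{"entities": {"bird", "sky"}, "event": "fly"},
              {"entities": {"bird"}, "event": "sing"}]
print(viable_scenes(candidates, {"bird"}, "fly", None))
# only the first scene survives all three filters
```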

In summary, we have identified space-time markers embedded in a sentence. These space-time markers align the Talent_scenes of speaker and listener, leading to a possible set of scenes that can be derived from a sentence in isolation. This set is reduced to exactly one in the mind of the listener by applying constraints of meaning evolution based on the listener's experience. As the meaning task gets successfully executed, the net meaning determined so far is represented as the current level of alignment between the Talent_scenes of the speaker and the listener, which then becomes the evolution constraint for the next sentence.

Even as speaker and listener seek to reduce their differences of interpretation based on their shared Talent_scene, misunderstandings can occur. A misunderstanding is discovered when the space-time markers of the speaker's utterance violate some consensus condition in the listener's Talent_scene evolution. This leads to backtracking to the point where the listener's evolution of the Talent_scene diverged from the speaker's. A few corrective statements that rejig the Talent_scene let the conversation proceed as intended.

With this exposition, we have arrived at a way in which meaning is fully embodied and is at most a two-step computation from a linguistic expression to the underlying sensory experience. I have taken a simple route to express the relationship between lexical terms and sensory experience. It does look like a baby's vocabulary to me – simple nouns and simple verbs whose simple meanings come from direct sensory experiences.

For English readers, there might be a sense of disappointment, since I have based my arguments on Sanskrit. One observation I gathered from my study of the English language is its plethora of prepositions. With prepositions, English has done away with noun morphology almost entirely. Thus, with verbs to represent the time argument and prepositions freeing nouns from inflection, English has been able to spread spectacularly across the planet. Two points that I would like to highlight about English are:

  • Prepositions have liberated English from noun morphology. This has led to free borrowing of nouns from other languages or experiences. Furthermore, denominal verbs – verbs derived from nouns – are constantly being created.
    • For example, Google, the company that made an internet search engine, has lent its name to the verb form "go google it" to indicate "do a web search". Xerox enjoyed a similar synonymy with photocopying in previous decades.
  • The cost of removing noun morphology has been strict word order and punctuation. Without clear word order, the associated Talent_scene convergence between speaker and listener can quickly go from single evolution steps to multiple evolution steps, which lowers the efficiency of transmitting intent from a speaker to a listener.

Several generations of scholars have explored this space, seeking some tangible pattern that captures the miracle of human language. In my own way, I have put forth a theory that I think is simple enough to understand. Embodied linguistics is a clean way to keep lexical semantics computable in at most two steps. I have demonstrated this for a baby language that is direct and where the Talent_scene is a simple union of the underlying Sense_scenes. Just as a baby moves from a limited vocabulary and limited grammar to a level of acceptable language fluency, I see this argument improving over time with more detail on the various devices we see in human language. Before I close, I would like to remind readers that semantic validation is a consensus-seeking protocol and prediction is a possible space-time continuation memory pattern.

This completes my 4-part series, which introduced readers to embodied linguistics, explored situated awareness and scenes, introduced the concept of the Talent_scene as a foundation for grounding shared meaning, and finally identified space-time markers in human language to complete the bi-directional mapping between a Sense_scene and a linguistic expression.


Talent (3/4)

August 17, 2017

Starting from simple axioms on the embodiment of language and on perception being grounded in situational awareness, we constructed Sense_scenes as topographic representations of sensory data. We also accepted the use of the term scene due to its spatial (snapshot) and temporal (evolution) aspects. The key observation we pointed out was that a Sense_scene is rooted in the reality of sense perception. We stretch the idea of detection and encoding of a sensory signal to encode a proto-meaning construct. Sense_scenes are independent of each other due to mutual exclusion on the sensory types. This offers an opportunity to combine Sense_scenes as long as certain fundamental aspects are preserved.

By construction of the Sense_scene, we know that there are two common dimensions across Sense_scenes, namely the spatial and temporal dimensions. Space-time is so fundamental that its importance cannot be overstated. As this theory progresses, space-time could be our link between the subjective and objective experiences of language.

A Sense_scene is sense specific. In the neuroscience literature this is referred to as unimodal cognition, where the single mode refers to the single sense involved. We briefly touched upon the many intelligences that the human body can express. Since each of these intelligences is built on sensory building blocks, it is reasonable to expect a relationship between Sense_scenes and body intelligences. I will use talent to indicate a body intelligence. Alternatively stated, multiple intelligences are the same as multiple talents. A simple approach would be to consider a Talent_scene as a consistent combination of selected Sense_scenes. The first expectation is that a Talent_scene is internally consistent on the primary space-time dimensions. The second expectation is that the derived proto-meaning combination that arises from Sense_scene combinations leads to meaningful multimodal representations.
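A minimal sketch of these two expectations, assuming toy structures for Sense_scenes (the space-time fields and the 0.1 integration window are invented placeholders):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SenseScene:
    """One sense's topographic snapshot: features plus configuration."""
    sense: str
    features: frozenset
    timestamp: float       # shared temporal dimension
    location: tuple        # shared spatial dimension

def talent_scene(scenes):
    """Combine selected Sense_scenes into a Talent_scene, enforcing:
    (1) one scene per sense (the inputs are unimodal), and
    (2) internal consistency on the shared space-time dimensions."""
    senses = [s.sense for s in scenes]
    assert len(senses) == len(set(senses)), "one Sense_scene per sense"
    t0, x0 = scenes[0].timestamp, scenes[0].location
    assert all(abs(s.timestamp - t0) < 0.1 for s in scenes), \
        "temporal consistency within an integration window"
    assert all(s.location == x0 for s in scenes), "spatial consistency"
    return frozenset().union(*(s.features for s in scenes))

visual = SenseScene("visual", frozenset({"bird", "sky"}), 0.00, (0, 0))
audio = SenseScene("auditory", frozenset({"chirp"}), 0.05, (0, 0))
print(talent_scene([visual, audio]))  # a consistent multimodal feature set
```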

Note that I have subtly moved from Sense_scene proto-meaning to Talent_scene meaning the moment we moved from unimodal to multimodal representations. This shift is consistent with the definition of talent – susceptibility to encoding or symbolic representation. Coding here is the second-order integration of Sense_scene feature sets and configurations into a Talent_scene description that is internally consistent. As an example, the eyes reporting that the limbs move in the opposite direction to what proprioception reports indicates either a misinterpretation of information or an impairment. Vertigo is a classic situation where proprioception indicates standing on stable ground while the distance to the ground as computed by the eyes creates a sense of loss of balance. Going past these spurious overrides of perception and replacing them with a true and consistent acceptance of the Sense_scenes that contribute to the Talent_scene is what helps a talent reach higher levels of expertise.

The conclusion I am driving towards is that the first cognizable form of meaning arises at the level of the Talent_scene or, in general, in a consistent combination of Sense_scenes. Conversely, anything that can be perceived as meaningful will eventually terminate its pointer-chasing at a Talent_scene representation. Since the link from the Talent_scene to the Sense_scene is rather immediate, meaning is eventually grounded at the Sense_scene level. Thus, our constructive approach from senses to Sense_scenes to Talent_scenes leads to one embodied explanation of where meaning is eventually realized in the linguistic cognition process.

Let us investigate the embedding of the Talent_scene in the space-time dimensions. Internal consistency means that Sense_scenes don't contradict each other's inferred proto-meaning. This is self-evident from the fact that senses are by definition independent. In other words, the eyes cannot verify what the ears hear. At the same time, the sampling frequencies of the senses can be assumed to differ, allowing information from each Sense_scene to be integrated into the Talent_scene. Another advantage of different sampling rates is that they allow differential attention to different Sense_scenes. Attention as a cognitive capability is a study in itself; it is not directly related to the emergence of meaning from an utterance but does have an influence. So, we will not investigate attention further.

As Sense_scenes evolve, so do Talent_scenes. Consequently, the predictive properties of Sense_scenes, as learned/experienced continuations, offer predictive evolution of Talent_scenes. The need for recall efficient enough to support prediction faster than the occurrence of the event calls for the next axiom on memory representation.

Axiom 3 (Memory): Memory is stored as space-time patterns.

The consequence of memory patterns in space-time is that a "thread" of continuity is picked, as opposed to assembling a sequence of multi-branch transitions. The continuity thread can have arbitrary look-ahead. Experts in various talents have essentially mastered the multi-step continuity representation in memory as patterns of possible evolutions. This allows them to freely combine various patterns, predicting different outcomes and offering multiple options to choose from. Think of a freestyle jazz musician, a street dancer or a chess Grandmaster. The difference between an expert and an artist comes with the dexterity of scanning through the memory pattern space and stringing together unique expressions that create an aesthetic effect. Without the possibility of memory as a space-time pattern, these artistic expressions would be harder to explain.
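A toy model of this "thread" recall (the prefix-keyed store and the chess-flavored example are my own illustrative inventions):

```python
from collections import defaultdict

class PatternMemory:
    """Toy model of the Memory axiom: store whole space-time trajectories
    ("threads"); recall returns every stored continuation of a prefix in
    one step, instead of re-deriving transitions branch by branch."""
    def __init__(self, prefix_len=2):
        self.prefix_len = prefix_len
        self.threads = defaultdict(list)

    def store(self, trajectory):
        key = tuple(trajectory[:self.prefix_len])
        self.threads[key].append(tuple(trajectory))

    def recall(self, observed):
        """Arbitrary look-ahead: the full remembered threads come back."""
        return self.threads[tuple(observed[:self.prefix_len])]

m = PatternMemory()
m.store(["e4", "e5", "Nf3", "Nc6", "Bb5"])   # two remembered "lines"
m.store(["e4", "e5", "Bc4", "Nf6"])
print(m.recall(["e4", "e5"]))                # both whole threads, not one step
```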

The Memory axiom tells us about an efficient form of memory that is useful in our context. At this point we don't yet have the machinery to describe memory or its recall in more detail. That is why we take, as a consequence of the Memory axiom, the possibility of recalling the entire memory pattern. Once I have learned more about the human brain and memory systems, I shall come back and revise this section to reflect my latest understanding. Until then, I politely beg the reader's indulgence for the lack of precision here.

A dynamic space-time representation of memory does not mean that state representations are unimportant. The attentive reader will have noticed that we started with the feature sets and configurations that constituted the Sense_scene. At the level of Talent_scenes, the feature sets and configurations of the ensemble of Sense_scenes are a good candidate for the state of that Talent_scene. Why are states of a Talent_scene important?

We have posited that meaning originates as proto-meaning at the level of the Sense_scene, as a consistent feature set and configuration. We have moved from Sense_scene to Talent_scene as an abbreviation for the relevant Sense_scenes to be combined to give rise to meaning at the Talent_scene level. At the same time, we have insisted that consistency be maintained at all times. Alternatively, the evolution of meaning can be seen as a consensus task specification of a distributed system. This formulation opens up the possibility of applying interesting ideas from Maurice Herlihy's work characterizing distributed systems using combinatorial topology. We will not delve into where that direction of investigation can lead us. However, we note that combinatorial topology could provide some insights into the organization of the human brain to support computation of lexical semantics.

Our bottom-up constructionist approach has got us to the Talent_scene, which is a useful construct. It is sufficiently archetypal, since it is closely linked to the Sense_scene. At the same time, being inherently multimodal, it offers a concrete setting for lexical meaning to be grounded in. In order to complete the relationship between the meaning representation of a Talent_scene and the meaning representation of a linguistic expression, we need to approach top-down from the language perspective.

In our next essay we will explore structural linguistics to reveal insights that can help establish the connection between meaning embedded in a Talent_scene and the meaning indicated in a linguistic expression.

Situated Awareness and Scenes (2/4)

August 17, 2017

The primary objective we seek to resolve is the almost instantaneous miracle of human language generation and comprehension. In my previous post on Embodied Linguistics we started on the journey to explore how lexical semantics can be grounded in the human body. We opened with a self-evident axiom on the embodiment of human language, viz.,

Axiom 1 (Embodiment of Language): Human Language is an expression of humans.

The Embodiment of Language axiom scopes the conversation to how human language has come into being in the context of the human body. The import is that outside the confines of the human body, human language could have some unexpected consequences. Determining the meaning of a word when it is expressed in terms of other words is a case in point. At some level, there is an expectation that the meanings of primitive words are just known. With that basic premise, a variety of interpretations of a variety of expressions is assimilated into the knowledge base. So, how do we establish the primitive meanings?

Psychologists and linguists have studied the language acquisition process in human infants to determine when a word is born in a child's mind. The question that has been raised is whether a corpus of linguistic knowledge pre-exists in the human brain or whether the brain is a blank slate. Experientially, understanding language acquisition gives insights into learning processes and into working with disabilities. Given that verbal-linguistic is one among many intelligences, understanding the evolution of linguistic abilities can help children have a better quality of life. One aspect of human child development is the complete dependence of the human infant on its parents. Human babies take about 12 months to achieve independent locomotion and a couple more years before they can communicate and interact effectively. The first 5 years of a human baby's development are a beautiful time one can spend with the child.

Whatever else we may hypothesise, there is one inevitable fact for a human baby – it is born with a human body. The human body is naturally equipped with senses that allow the human child to interact with the environment. Although the expressed responses of a human child seem limited and basic, we can conclude that there is some perception that helps the child locate itself in the world it exists in. This observation leads us to the next axiom:

Axiom 2 (Situated Awareness): Perception starts with situated awareness.

Approaching perception as an outcome of sensing and making sense of the environment of habitation, the senses need a substratum in which to express their determinations. To keep it simple and as general as possible, we can let this substratum be the space-time continuum. So, we define situated awareness as the space-time representation of what the senses have sensed.

Let us make this a little more concrete. Humans have 5 senses – auditory, visual, olfactory, gustatory and proprioception. Each of these senses can extract features, and the mutual configuration of those features due to their possible co-existence. The dynamic nature of the environment demands a combination of a snapshot representation and possible continuations of that representation. Philosophers have offered several viewpoints and debated extensively on the nature of these representations. For our purposes, we shall keep it simple and look at two dimensions – a space dimension to support a snapshot of the environment and a time dimension to record the evolution of the snapshot. Having established the relationship between representation and the space-time continuum, let us view space and time from the standpoint of how sensation occurs.

Sensation needs a substrate or medium in which to detect an event. Static input is boring after the first observation. Most senses have evolved to a point where the only interesting thing that happens to a sense is when there is a change, or an event, in a medium of expression. The medium defines the scope in which that sense can detect signals from the environment. Expanding on this analogy, we get the following (draft) table for the various human senses:

Sense          | Space or Medium          | Time or Event
Auditory       | Sound-conductive medium  | Sound
Visual         | Visible space            | Light
Olfactory      | Air                      | Chemical
Gustatory      | Ingested food            | Saliva-digested chemical
Proprioception | Self-boundary            | Musculo-skeletal movement

The details of the space-time continuum representing the signals detected by the senses are important at a biological level of interaction. However, for our purposes, the fact that the outcome of sense-making of the environment is a set of features and an associated configuration will suffice.

As the granularity of perception gets finer, more and more features and configurations emerge. Experience, learning and labeling contribute to this increase in features. Sensory information conflicts are eliminated at the level of situated awareness through valid configurations. One can imagine, therefore, a snapshot unfolding for a particular sense as signals are processed at a point in time and over a time period. For the sake of simplicity, let us assume that snapshot sampling is slower than sense sampling. Formally, a scene is a snapshot of sensory perception across all senses. Thus, we can imagine a scene as a topographic representation of sensory perception limited to a sampling interval that covers the sampling frequencies of the senses and the integration time to juxtapose the topographic representations across all senses.
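A toy illustration of this sampling assumption (sense streams as lists of (time, feature) pairs and a fixed snapshot window are invented structures):

```python
def scene_snapshots(sense_streams, window):
    """Integrate per-sense samples into scene snapshots: each sense samples
    at its own rate; a scene collects whatever every sense reported within
    one (slower) snapshot window."""
    end = max(t for stream in sense_streams.values() for t, _ in stream)
    t = 0.0
    while t <= end:
        yield {sense: [f for (ts, f) in stream if t <= ts < t + window]
               for sense, stream in sense_streams.items()}
        t += window

streams = {"visual": [(0.00, "edge"), (0.03, "motion"), (0.12, "bird")],
           "auditory": [(0.05, "chirp")]}
for scene in scene_snapshots(streams, window=0.1):
    print(scene)   # one multimodal snapshot per integration window
```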

Scenes are important to our analysis moving forward. Using the term scene seems appropriate in this context, as a scene typically contains both static and dynamic aspects. Scenes offer time ordering, branching and branch predictions. Some scholarly evidence supporting the use of scenes in this context includes:

  • Leonard Talmy's Force Dynamics is a visual language that uses spatial representations for entities and interaction forces.
  • The Sphoṭa theory of Bhartrihari[2] indicates 4 stages in the creation-destruction of sound.
    • परा Parā: The origin and final subsumption part of the lifecycle of language communication
    • पश्यन्ती Pashyanti: First "picture" – still subconscious – of what needs to be articulated
      • Pashyanti contains the inherent impulse toward expression in time and space.
    • मध्यमा Madhyama: First "mental" representation
    • वैखरी Vaikhari: Eventual communication
  • The latest works on Cognitive Grammar are increasingly using 2D visual representations with annotations to express linguistic operators that go from a structural to a semantic interpretation of sentences.

With this we have established that at the sense level there is a Sense_scene that gets constructed as a topographic map of the features and configurations deduced from the signals received from the environment. Situated awareness can be seen as a consistent combination of Sense_scenes. It must be clear that a Sense_scene is a construction based on the sensory inputs. To bring back the idea of meaning, Sense_scenes are proto-meaning constructs. If a signal is detectable and recognized, that signal has meaning for that sense. This is an important observation as we start to see what we can do next with Sense_scenes.

Interesting experiments have been constructed to show how far the human mind can be fooled into believing alternate representations of Sense_scenes. The rubber hand illusion is a classic experiment where the visual_scene can lead to a mistaken proprioception_scene. These and other experiments reveal brain wiring and how the human mind works. For our purposes, these are interesting detours into the quirks of the human brain.

In our next essay we will look into how to move from Sense_scenes and proto-meaning as encoded by senses in their topographic maps to higher order cognitive constructs.

Introducing Embodied Linguistics (1/4)

August 17, 2017

Language – as we commonly refer to it – is uniquely human. In fact, we can go so far as to say that a human without language is no better than a humanoid or an ape. We can well imagine that the day language was born was the day memetic evolution started to dislodge biological evolution. The result of this change in the pace of evolution is spectacular, as we experience our world today. We have come a long way from caveman to urban citizen thanks to language.

For as long as language has existed, people, philosophers and scientists have been puzzled by how something that seems so effortless to a human is so hard to explain in biological terms. Amazing advances in computing and the formulation of Alan Turing's test in the last century have driven a whole generation of scientists fervently looking for computational models that can best approximate human brain functioning and thereby create artificial intelligence (A.I.). The efforts are laudable and have recently seen a resurgence in the form of Machine Learning. The unfortunate side effect of this line of enquiry has been to promote the view that human intelligence is a computable entity external to humans. A stretch of the same line of thought is that human cognition can essentially be replicated outside the human body. The consequence of disembodied cognition is that human language communication and understanding can be explained as a disembodied phenomenon.

My position is simple – as long as language is seen as a rewriting procedure based on production rules and formalized as a grammar, the computational approaches are sound. Based on a description of language syntax, we can design systems that are capable of generating or parsing strings and expressions of a certain grammar. Noam Chomsky's seminal work on formal grammars has led to significant advances in computational approaches, complexity classes and applicability to human-language interactions. Where things get murky is when comprehension of human language is investigated as a computable entity that is essentially self-contained. In other words, when can I unambiguously say that a sentence is meaningful, whether I understand it or not?

Note that I slipped in two words that I have been exploring in the context of language comprehension – meaningful and understanding. Meaning is the limiting condition when the gist of a linguistic expression is established in accordance with the beliefs and knowledge of the hearer. There can be several meanings to a syntactically correct sentence, each of which depends on the level of isolation of that sentence from the context of utterance. Understanding can be seen as a self-reflective state that can be distinguished from meaning in the following sense – "I follow (as the gist or meaning) what you say but I don't understand it". In some way, my self-realization of my understanding is to establish a coherence between the speaker's intent, my current listening context and the current expression.

One challenge in exploring human language comprehension is the fact that language is less an individual contribution and more a cultural artifact that establishes a means to communicate effectively. The origin of human language is shrouded in mystery, given that all human languages have spoken/sign forms while few of them have written representations. Although the record of written language goes only so far, classical languages like Sanskrit and Latin have been handed down over many generations and are rather well preserved in spoken and written form.

My quest to establish key tenets of language that can help us relook at language as an embodied phenomenon led me to investigate the scholarship created around Sanskrit as a classical language – its etymology, grammar and semantics. It is widely accepted that Pāṇini's Aṣṭādhyāyī is a monumental effort that has, on one side, unified the Vedic and conventional Sanskrit of that age and, on the other, is an extensible system that describes how Sanskrit syntax works. As several Sanskrit scholars have observed, Pāṇini created a comprehensive treatise around sounds, phonemes, syllables, words and sentences.

One authoritative commentary on Pāṇini's Aṣṭādhyāyī is Patañjali's Mahābhāṣya, which is seen as the fountainhead of all things related to the Sanskrit language and its comprehension. Very early on, Patañjali makes an astute observation: the validity of a word is eventually proven by its usage. Thus, the final completeness or correctness of any human language effort is established by the people who eventually use it in their daily lives. This is quite consistent with what we currently refer to as a living language – a language that is sustained by a community of people to run their daily lives efficiently. There could be shortcomings or advantages as we compare language expressiveness with regard to the latest words or usages in other languages. These comparative points are useful, but they do not limit a language's standing as a living language capable of sustaining a culture and worldly transactions.

Taking a cue from Patañjali's observation that the validity of a word is proven by usage puts us in a strange situation. On one side, there is clear evidence that the human brain does some kind of processing to communicate using human language; on the other side, feedback from the community of users defines how meaning is established for the terms used in the communication. Another way to interpret this observation leads to the classic dilemma of whether meaning is subjective or objective or both. Human language lands us right in the middle of subjective and objective experiences.

In this essay series, I wish to focus on how lexical semantics – or meaning of words – can be established within an embodied context. So, I start with a self-evident axiom:

Axiom 1 (Embodiment of Language): Human Language is an expression of humans.

Consequences of Axiom 1 are as follows:

  • Human language is, therefore, embodied in the human body.
  • A finer-grained exploration can scope human language as a capability of the human brain, which can use the embodied facilities of being housed in a human body to compute the various aspects of human language.
  • The focus is on the processing capabilities of the human body as an entire system engaged in human language generation and comprehension.

Correspondingly, some points not in scope of my investigations are as follows.

  • I am less concerned with whether machines can achieve human language competency or pass Turing's test for intelligence.
  • It is conceivable that, if suitable morphisms are established between the human body as a computational system and some other disembodied system, human language could be realized in a way quite close to what we currently see as representative of human language. How far this can be achieved can be seen only after sufficient progress has been made on the current topic of embodied linguistics.

We are in a golden age of neuroscience and machine learning. At some level of discourse we agree that technology serves humans, delivering efficiency and improving the quality of life on the planet. The more we wish to see disembodied technology become more "human", the greater the necessity to understand human language as one fundamental capability that makes us human.

In my next essay I shall present a way to think about meaning working bottom-up from the senses. I believe such a constructionist approach will be suitable in this context to ensure that the embodied nature of linguistics is not seen as an afterthought but is indeed a primary principle of how human languages work.

Social recommendation is a great slave but a bad master.

May 8, 2017

Lunch conversation on a lazy Sunday is what I look forward to. This is the time when we all get to see each other in an unencumbered way – without pressures that life throws at us. It is my time to reconnect and bond with people in general and family in particular. Yes, I like my Sunday family time.

As usual, we had our regular drama of getting together at the lunch table to get started. Unwinding after Sunday morning activities takes some time and effort – family effort. Anyway, here we are at the lunch table, discussing whether giving out sweet mangoes before lunch was a good idea or not. The other contender was cucumber. The point raised by my honey was that cucumber would have kept hunger at bay, but the mangoes killed my children's appetite. Two of them – one high-schooler and one soon-to-be middle-schooler.

With that context, lunch moves forward. My younger one is toying with his food. His older brother has gotten up and moved on. My wife says she needs to rest now – the morning was rather full for her. So, I am left to provide company to my to-be middle-schooler. Well, I don't miss any teaching moment to let my "middle-schooler" know what middle school means for him ;)!

Something fascinates my son about fire. We got to it in a rather roundabout way – talking about how garbage disposers use blades to chop rather than a heater to burn. When he started solving the make-garbage-burn problem, I asked him why he thought about burning waste. And he told me that burning, or fire, is "cool". Oops, my antenna went up!

I asked him to describe what he saw that fascinated him, and he told me about a few science experiments on YouTube. Some dude apparently covered his bare hand with hand sanitizer and set it on fire! It was rather clear that someone was trying to look cool on YouTube by doing stunts that can be misinterpreted as "science" and hence repeatable! While both my sons were fascinated by the fact that hand sanitizer could burn (due to the alcohol, as my older one confirmed), they didn't realize that the person was doing the stunt with his bare hands.

So, how did my to-be middle-schooler find the video? What safety conditions were spoken about? I probed. The answers were not good :(!! The video came up under "related videos" on YouTube. That's when it hit me – my son was being led by a recommendation engine that has evolved from social choices. And there was no way to judge whether the recommendation aligned with his age or value system. It was just way too cool, and my son was impressed.

And that is when an old adage flashed into my working memory – "electricity is a good slave but a bad master". In today's information age, the "electricity" of the 1950s is the "recommendation engines" of the 2010s. These recommendation engines are curated over large numbers of social responses – from simple like/retweet responses to browsing history, transaction history, categories of behavior, recommendations due to others' behaviors, etc. As I heard at a recent seminar on Sense-making and Making Sense, it takes just 11 likes (sample size) to predict a social personality on par with co-workers. Of course, spouse-level intimacy needs 200+ likes. This is the current state of the art of social personality determination from popular social sites.

Eventually, recommendation engines use social categories to compute suggestions. The quality of search has improved to a point where I rarely have to go to the second page, because today I get several alternatives to explore within the first page. I recall that I used to scan 5+ pages far more often in the past. These days I barely get past the 2nd page before I change my search criteria. And most often, my search expectations are rather precise.

However, my digital trail is tangible data for recommendation engines. Curated over billions of people, statistics does throw up interesting possibilities to cater to an avid searcher. Now I am getting tuned to the recommendation engine's behavior. From an unknown entity, I quickly become a social category with a significant satisfaction score for a certain class of search results. Over time, the evaluation against my value system gets less rigorous, since the results are rather close to what I expect, or else what I am searching for needs to be fine-tuned.

On the other hand, for a young mind that is still finding its way in the cyber-world, social recommendations are great escapes into fantasy! Without the compass of values, the outcome becomes unpredictable very fast.

With so much thought going on, I had to act fast. I seized the teaching moment and told my to-be middle-schooler what it means to evaluate a YouTube video in a way that gives him confidence that it is within the boundaries of his emerging value system. And it had to be in a form that is easy to apply – my child's identity is still evolving, and this means that anything that is not efficient or not easy, or both, he will not do. My response was a "buddy system". In Boy Scouting, one rule the Scoutmasters enforce early on is to always work with a buddy. Even to go to the restroom, a scout needs a buddy.

Internet exploration needs a buddy too. My first advice on being safe in an internet environment to my to-be middle-schooler is to do anything on the internet with a buddy – a peer, responsible caring adults at school (I believe in the teachers, otherwise my child would be studying elsewhere), his elder brother, me, anyone who has actually explored similar topics and can confirm that the topic is OK. That, I told my son, will make his argument defensible when I ask him what he is looking into.

A young mind is not easily convinced – so after several patient exchanges of what-ifs we finally agreed that the situation is not as rosy as it looks – social recommendations are good slaves but bad masters. While he can get to some new information reasonably fast, he has to constantly evaluate the value and quality of the result before pursuing it.

After a while, he shared that all this talk was making him rather anxious. That eased me up a bit – a to-be middle-schooler should be anxious about how the internet is out to get him. Having seen different kinds of kids at school, my son has a fair understanding of what group psychology is all about – at least at the elementary-school level of children's groups: the group leader, sub-groups, moving across groups, keeping stable groups, behavior models of different groups vis-a-vis the group leader, one or many leaders, and so on. (Actually, it is rather fascinating to talk to my child about elementary school groups – how they form, change, sustain or disperse.)

All this effort did break a sweat, I tell you. It is hard enough unraveling true from fake news these days. If we compound that with social recommendations that are trying to become friends with our children, then our scope of engagement increases from the physical context to the internet context. From what I have heard so far, the suck-out to the other end happens very, very fast. Every few hours some pockets of biased opinion surface on the internet – some sustain as ideas or memes, generating feverish supporters, while others just remain as digital memory.

Recommendation engines are logic driven – there is no emotional content. And logic does not always mean rational – at least not in a human sense. At best, a recommendation is a close approximation of what a category of people most prefer as their next action on the internet. The only value is statistical convergence over large numbers. This bland and mechanistic determination of what I am to be served can be a good thing or a bad thing, depending on how much control I let go of. The laziness of thought a recommendation engine encourages is like reading glasses – recommendation engines compensate for the weak muscles of thinking by serving up what many others found useful. Over time, the thinking muscles just get lazier and become subservient to the recommendations. This flip from being a master running a tool to becoming a conforming data point is the risk I see.

There is clearly an inner threat to our cyber-society that warps our value system and presents it as Machine Learning and recommendations. Does the Machine have a heart? What does It learn and regurgitate back to me? How much can I believe that what I get back is what is coherent with human good?

What is good?


Evolution and Computation

October 3, 2016

Charles Darwin's celebrated theory of evolution has had its fair share of debate – both for and against. In particular, "survival of the fittest", which describes natural selection, has been criticized for not being precise enough. This is not the only concern expressed about the theory. However, to my knowledge, evolution theory is the only defense against Intelligent Design, where there is an invisible Designer of Destiny. (This viewpoint of an Intelligent Designer has been taken to absurd comedy by Douglas Adams in his science fiction series "The Hitchhiker's Guide to the Galaxy"!)

So, is evolution formalizable as a theory that can be actively debated? The surprising answer is yes! Leslie Valiant's Probably Approximately Correct (PAC) approach to learning and evolution has been a wonderful eye-opener for me.

The points that Leslie Valiant highlights as the challenges of Charles Darwin’s evolution theory are:

  • Rate of evolution: It is unclear how many generations would be required to evolve complex mechanisms from simple ones. For example, think of the eye!
  • Maintaining an evolved feature: The environment that an organism thrives in is constantly evolving – with different scales of change, including immediate, daily, seasonal and catastrophic events.

Even more difficult to explain is the amount of biological variation we see, which seems to have occurred in a relatively short period of time! So, what are the mechanisms that drive evolution?

Leslie Valiant approaches evolution as a computational learning scenario. Just as learning proceeds without the need for an intelligent designer – purely based on the computational and statistical features of the learning context – Valiant proposes a framework to define evolvability in learnability terms. With his characteristic clarity, Leslie Valiant uses precise terms to navigate the murky waters of Darwin's theory of evolution:

  • Performance: The measure of fitness of a certain organism.
  • Target: Casting evolution as a special class of learnability allows Valiant to set up a target for evolution, just as there is a target for learning. The target of evolution is simply higher performance.
  • Ideal Function: The elusive but existential function that can specify the most beneficial course of action for any evolving entity (or species) at any instant in any specific environment.
  • Evolvable Target Pursuit: The course of evolution is guided by the succession of opportunities that arise as the species and the environment change.
  • Evolutionary Convergence: When a target is accessible and beneficial, convergence will occur at a predictable, perhaps rapid, rate determined by the pace of the evolutionary algorithm. Since the system is in a state of flux, the emergence of new beneficial targets is considered part of the convergence process. The convergence, however, is constrained by the environment.

Having set up these definitions, Valiant differentiates Evolutionary Algorithms from PAC Learning Algorithms as follows:

  • Evolution can proceed independent of experience: Darwinian evolution postulates that the sequence of experiences should have minimal influence on genetic variation. Instead, the role of experience is to compare the performance of various offspring.
  • Evolution has to succeed from a fixed starting point: The process of PAC learning is relatively independent of the initial hypothesis. Biologically, however, the possibility of reinitializing to a starting point that is evolutionarily convenient is not realistic! It seems that biological entities cannot afford arbitrarily large decreases in performance. It is believed that the mutations that have been adopted were mostly beneficial, or at least close to neutral, when adopted.

By setting up the goal of uncovering any evolution algorithm that nature might be using anywhere in the universe, Valiant studies the class of functions that can support evolution.

  • Arbitrary but fixed starting point: The point at which a certain evolution starts is taken as random. Once the point is chosen, there is no freedom to change it for the convenience of long-term evolution.
  • Modest-size population: The set of variation mechanisms (e.g., living systems offer asexual, sexual and lateral mechanisms for the gene pool to evolve) is expressed in a limited-size population.
  • Modest number of generations: The evolutionary progress should be realized in a rather small number of generations.
  • Polynomial computational cost of creating variants: There is limited space and time available in the universe for organisms to evolve. A toy sketch of these constraints follows.
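As promised, here is a toy rendering of these constraints (not Valiant's formalism – the bit-string fitness and single-bit mutation below are invented purely for illustration): a fixed random start, a modest population, a modest number of generations, and polynomial work per generation.

```python
import random

def evolve(fitness, mutate, start, pop_size=50, generations=100):
    """Toy evolvability loop: experience only *compares* variants (it never
    rewrites the genome directly), and performance never decreases because
    the parent always competes alongside its own variants."""
    current = start
    for _ in range(generations):                 # modest generations
        variants = [mutate(current) for _ in range(pop_size)] + [current]
        current = max(variants, key=fitness)     # polynomial work per step
    return current

# Invented example: evolve a bit-string toward an "ideal function".
target = [1, 0, 1, 1, 0, 1, 0, 0]
fitness = lambda g: sum(a == b for a, b in zip(g, target))

def mutate(genome):
    genome = list(genome)
    genome[random.randrange(len(genome))] ^= 1   # flip one random bit
    return genome

start = [random.randint(0, 1) for _ in target]   # arbitrary but fixed start
print(evolve(fitness, mutate, start))            # converges to the target
```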

Having set up the constraints, Valiant searches for possible evolvable functions that are also robust. Some observations made are:

  • Boolean functions can be viewed as a limited set of options.
  • The likelihood of discovering evolvable Boolean functions for all distributions seems low.
  • Generalizing to real-valued functions and convex loss functions seems to show promise.

So where have we got so far? Leslie Valiant's PAC-based evolution algorithms offer us a concrete way to reason about the computability of evolution and its limits. Valiant leaves us with some interesting thoughts:

  • Why does competition increase functionality?
  • How to account for biological circuits and their evolution?

Valiant leaves us with a feeling of awe that remains undiminished in his critical examination of Evolution as a Computational Process.

[This material has been compiled from Leslie Valiant’s book Probably Approximately Correct, Chapter 6. Mistakes in interpretation are entirely my own!]


Life is an arrangement

October 2, 2016

Is it not amazing that the most in-your-face fact is the one that is most difficult to explain? Forget explaining it – we are not even close to knowing the origin of Life. So, what is Life anyway?

Sadhguru Jaggi Vasudev uses the term arrangement frequently –

So what if Life itself is an arrangement – a representation in a geometry?

There is another perspective on arrangement – one that is closer to our physical experience – architecture. Christopher Alexander identifies 15 properties of wholeness. In his book series on the Nature of Order, the first book, which identifies the 15 properties, is aptly titled "The Phenomenon of Life". Christopher Alexander sees geometric arrangements as the essence of life. What is more, each of these properties is also an operator. Thus, the same 15 properties of wholeness also contribute to the unfolding of wholeness.

If a Mystic and an Architect agree, there must be something to it, right?

Geometry has been fundamental in our understanding of physics. Without Albert Einstein’s conception of space-time continuum, I wonder where we would be in our understanding of physics. Space-time continuum and the impact of gravity on space-time is one of the outstanding achievements of human mind.

So, having said that, is Life only an arrangement? There is far more dynamism, expressiveness and responsiveness to Life than an arrangement can capture. There is an essential "Life-ness", or Cit, that is hard to explain. Maybe Life is too big a problem to attempt right now. Let us scale it down. How about consciousness? Or, even better, cognition. What is cognition?

Let us start from what we perceive. Immanuel Kant proposed two innate intuitions: space and time[link]. What can explain these intuitions in us, given that any sense organ can at best finitely sample our environment in finite time? What leaves us with the feeling that there is a continuum of space or time? I am not sure whether we perceive time – a better way would be to say we perceive the passage of time, or events. So, let's redefine our intuitions as space and events.

I posit that our intuitions of space and event are a result of Computation! Without computation, I don’t see a way in which a finite bunch of signals can be generalized to a seemingly infinite spread. Some “filling up” is happening. And that “filling up” is the result of a Computation.
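A minimal illustration of such "filling up" – ordinary linear interpolation, chosen only because it is the simplest computation that turns finitely many samples into a seeming continuum:

```python
def fill_up(samples, x):
    """Linearly interpolate finitely many (position, value) samples to a
    value at any query point x - a toy stand-in for the computation that
    turns sparse sensory signals into a perceived continuum."""
    pts = sorted(samples)
    if x <= pts[0][0]:
        return pts[0][1]
    if x >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Five finite samples, yet every point in between gets a value.
samples = [(0, 0.0), (1, 2.0), (2, 1.0), (3, 3.0), (4, 2.5)]
print(fill_up(samples, 2.4))   # 1.8 - "filled up", never actually sensed
```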

But wait a moment – is it not a representation, a geometry, that is being born here? Well, yes and no! The fact that there is computation does not always imply a representation. However, purely for economy of cognitive activity, representation is useful[link]. Our brains are already energy hogs. If we let every neural activity be computed from first principles every time, that is all our neurons will ever have time for – from the time they come into action till the time they retire. So, representation follows computation as an outcome of cognitive optimization. And a representation is only as relevant as its subsequent use. Representation is not an end in itself – it is a means to an end!

So, if Cognition has computation to supply its intuitions, what can we say about Life? Does computation – or more generally – information processing turn out to be more fundamental to Life than arrangement or geometry?


How does the ‘Self’ form?

April 28, 2012

In an interesting exploration of Model Thinking, the professor spoke about how model thinking enables us to be better citizens of the world. The point is that in order to understand the world, we construct models. Any abstraction works by emphasizing some aspects to get at the perceived essence. The famous quote that "all models are wrong, some are useful" is very appropriate here. Therefore, one must quickly come to terms with the following:

  • to understand the world, each one of us makes a suitable approximation – a model – of the world.
  • the way this model is built defines how we understand and reason with what we perceive.
  • so, the way we transact with the world depends on our mental model of the world.

Thus, realizing that there is a model behind every reasoning approach is the primary step in expanding the frame in which knowledge occurs. Or, to boil it down to pedagogy, the education system should aim at delivering a mature human thinking process that allows each individual to constantly reassess the frame of knowledge he or she uses and expand it in the 'right' way.

While science has helped me attribute reality to a large, reproducible system of hypothesis testing, I am unable to bring that to my own personal behavior, which includes rational thinking, emotional responses and non-verbal communication. For transacting in the world, the constructions of science are cumbersome and essentially non-committal. To me that is not good enough. There needs to be guidance with a commitment to the process such that at every step, "I am able to realize and expand on what I am". One way to interpret this is in terms of the values and ethics that science shies away from. To me, that leads to a lack of substance in an educational system that relies only on science.

At its very core, science is also an edifice of beliefs. There are rational arguments that make us think the system is sound. As I explore Advaita Vedanta philosophy, I am awestruck at the reasoning and illustrations. This philosophy (darshana) starts with 'direct human experience' and builds a sound and consistent system that encompasses 'who am I', 'what is this universe' and 'how am I related to this universe'. The way all of this is laid out appeals to my logical, rational thinking as much as science does. The search for the primary axioms of knowledge that can help me better understand myself seems to be the common goal of both science and spirituality. So why, then, this artificial divide? In my opinion, once a person has the experience of 'I' and 'You', the person is ready to start self-inquiry.

I have two kids of my own and have watched with interest as they have grown. The first smile, the first step, the first sentence, the formation of the personality. Even before I realized it, I had an independent individual with a personality of his own. And here is where my question really stands out – if the way to create a mature individual is to give them a way to assess and re-assess their frame of knowledge, how do we set up an education system that builds the drive to expand one's own frame of knowledge? Is this related to the process through which the 'Self' forms?

What I am thinking of is an educational system that

  • follows the child
  • nudges towards newer learning approaches
  • expands the ‘vocabulary of interactions’ across all sense systems
  • drives home the realization that there is a mental map shaping their understanding and responses
  • and, finally, leaves them with tools to expand this internal model all the time

The educational system should be closely aligned with the process of how the self forms. Then there is no ‘separate model/process’ learning, since that would create a level of indirection and translation that defeats the whole purpose.

We have with us a multitude of approaches to child education – each rooted in a different fundamental philosophy. I am thinking of the most comprehensive collection of approaches, one that includes the religious ones too. All of these approaches work based on their model of how a child can best be guided to work in the world at large and become an honest and mature citizen of the world. The fact that there are so many, and that new ones are still being devised, indicates to me that the world at large is still debating from multiple viewpoints and the common ground is still unclear.

While science has evolved and spirituality has evolved, I am unclear how evolved pedagogy is. We need more work, as we understand ourselves better, to formulate an educational system as outlined above. Understanding how the ‘Self’ forms in an individual is a vital key to this discovery.

Gestation period of a Human Child is 5 years

April 28, 2012

While most animals are complete and ready to walk and move from the instant they are born, young human babies are at best half-ready to be part of human society. The first 5 years take the child from a drink-sleep-excrete cycle to a language-understanding, socially aware, reasonably independent, unique human being. Science informs us that the brain is getting richer in its internal wiring, and the young mind has, largely based on external inputs and observations, built a reasonably consistent view of the world – sufficient for the child to be considered equivalent to a newborn animal. One could say that the gestation of a human baby is not 9 months but, more appropriately, 5 years.

Implication? Early Childhood Education can have a deep and profound impact on what happens between ages 0 and 5. While most parents feel guilty that they have missed being good parents, teaching and pedagogy have also evolved to a point where the parent is further confounded by the choices in teaching methodologies, teachers’ expertise and experience, and each institution’s view of its own functioning.

To me it all goes back to where the emphasis should be – the innate capacity of the child or the nurturing environment in which the child develops. Well, there is a basic minimum that has to cover both; no doubt about that. The various educational philosophies, which draw upon the science, observations and personal biases of their proponents, are seeking to give an answer for that basic minimum. My learnings in Early Childhood Education convince me that a lot of ground has been covered in Western teaching styles. While there is a conscious and clear acceptance that the information comes more from Western than Eastern societies, the overall lay of the land is rather well defined.

While religion-oriented education is mentioned, there is an edge when it is spoken about in Western society. This goes back to the deep divide between Science and Religion in the West: Science had to literally fight and break out of the clutches of religion – remember Galileo? The deep mutual mistrust between the two parties has led to a deeper cultural divide in educational philosophies.

Well, the parent is naturally torn between what resonates and what choices are available. Eventually, it seems, there is no right answer.

Will ‘Smart Computing’ make me feel ‘less dumb’?

January 13, 2011

So we are in the era of ‘Smart Computing’. This follows the era of Network Computing, which followed Personal Computing, which followed Mainframe Computing, as introduced by Forrester Consultants. The key characteristic is how the advances in technology improve on existing themes and introduce new themes in business.

As I was reflecting on this nomenclature of the eras, two thoughts struck me:

First, what is getting ‘smart’? While older systems crumpled and collapsed in the face of uncertainty, robustness used to be built by making investments in fail-safe technology. Today, the same level of robustness and accessibility is achievable at much lower cost due to the progressive evolution of information manipulation techniques, complemented by fast processing power. With low-cost hardware, racks of machines are assembled into high-availability data centers. This is a stupendous effort and truly an information technology marvel. Of course, we should not forget that we are standing on the shoulders of giants – everywhere, as we evolve.
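A back-of-the-envelope sketch (the numbers here are mine, purely illustrative) shows why racks of cheap machines can match the old fail-safe approach: assuming independent failures, three commodity machines at 99% availability each already beat one expensive machine at 99.9%.

```python
# Illustrative numbers only: availability of a single "fail-safe"
# machine versus a cluster of cheap replicas, assuming failures are
# independent and the service is up while at least one replica is up.

def cluster_availability(per_machine: float, replicas: int) -> float:
    """Probability that at least one of the replicas is available."""
    return 1 - (1 - per_machine) ** replicas

expensive = 0.999                      # one costly fail-safe machine
cheap = cluster_availability(0.99, 3)  # three commodity machines

print(f"single fail-safe machine: {expensive:.6f}")  # 0.999000
print(f"three cheap replicas:     {cheap:.6f}")      # 0.999999
```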

Second, will these ‘smart’ systems make me feel less dumb? The greatest challenge that human-computer interactions – or, extending it, human ‘smart-technology’ interactions – face is the blunt and disarming feedback that follows what seems to be an innocuous error. Software has become so sophisticated because it wants to encompass larger and larger segments of potential users. At the same time, there is a conscious acceptance that every individual is unique, with their own tastes and styles of working. While this sensitivity to human individuality is touching and sincere, the eventual face of the software seems far from the original intent. Complex configuration, lack of uniformity of experience and hidden surprises are the norm. People have resigned themselves to learning and relearning how to do the same thing over and over again as the software that solves the same problem evolves. It is a tribute to the flexibility of the human mind that more and more people are getting to use and even become successful on these systems. As a famous person said, information manipulation is so native to intelligence that 1 in 50 will naturally gravitate towards and become successful in information technology. I am surprised that it is still 1 in 50 and not 1 in 10 or even better. This is where the essential complexity of these systems lies – there are possibly aspects of human interaction and style that work so well in human-human interaction yet are just not suitably replicable in a human-machine interaction.

So, to answer the title question of this post, I think the ‘smart’ attribute is condescending towards technology. Having seen the mainframes, one can wake up to see how different the world is today with all the technology advances. Even on the interaction side there has been a lot of progress – who would have imagined an application that shows a location map with multiple overlays, including traffic and satellite views? Nevertheless, there still seems to exist a large set of failures in human-machine interactions.

Why so? I think we have the right combination of technology, intelligence and wherewithal to make the Mother of All Demos come alive today. The urgency to make money in a surreal ‘early bird gets the worm’ race is making careful design as rare as a whiff of fresh air in the city.

When can I be myself and not be humiliated by a system? That is the day I would consider the technology smart – until then, ‘smart’ is just an interesting, catchy name for a marginally evolved information manipulation system, or system of systems.