I’m both surprised and thrilled by a very interesting article in the July–August 2012 issue of American Scientist. In it, Prof. Dominic W. Massaro claims that it is possible for children to learn to read at a very young age, without explicit instruction. According to Massaro, with the help of the latest advances in cognitive science, computer science and mobile technology, children can be immersed in an augmented environment in which they acquire literacy intuitively. The full text of the article, “Acquiring Literacy Naturally,” is currently available at http://mambo.ucsc.edu/wp-content/uploads/2012/06/2012-07MassaroFinal2.pdf.

Some highlights that really drew my attention:

“Notwithstanding the intuitive primacy of spoken language, I propose that once an appropriate form of written text is meaningfully associated with children’s experience early in life, reading will be learned inductively with ease and with no significant negative consequences. As described by John Shea in this magazine, “there are no known populations of Homo sapiens with biologically constrained capacities for behavioral variability” (March–April 2011). I envision a physical system, called Technology Assisted Reading Acquisition (TARA), to provide the opportunity to test this hypothesis. TARA exploits recent developments in behavioral and brain science and technology, which are rapidly evolving to make natural reading acquisition possible before formal schooling begins. In one instantiation (Figure 3), TARA would automatically recognize a caregiver’s speech and display a child-appropriate written transcription.”

From one of the article’s figure captions: Technology Assisted Reading Acquisition (TARA), implemented on a digital tablet, automatically recognizes an adult’s utterance using automated speech-to-text recognition. In these examples, the adult’s comments are recognized and the digital tablet displays some of the words in high definition to the child. (Photographs courtesy of the author.)
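To make the TARA idea more concrete for myself, here is a minimal sketch of what such a speech-to-text-and-display loop might look like. This is my own illustration, not code from the article: it assumes the third-party Python SpeechRecognition package (import name speech_recognition) and a working microphone, and it stands in for the “child-appropriate written transcription” with a tiny, made-up vocabulary filter and a plain print-out instead of a real tablet display.

```python
# Hypothetical sketch of a TARA-style loop: listen to a caregiver's
# speech, transcribe it, and show only simple, child-appropriate words.
# Assumes the SpeechRecognition package and a microphone (PyAudio).

import speech_recognition as sr

# Illustrative stand-in for a child-appropriate vocabulary;
# a real system would use a much richer, age-graded word list.
CHILD_WORDS = {"ball", "dog", "cat", "milk", "book", "mama", "dada", "more"}

def display_for_child(words):
    """Stand-in for the tablet's high-definition word display."""
    if words:
        print(" ".join(word.upper() for word in words))

def tara_loop():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        while True:
            audio = recognizer.listen(source)
            try:
                # Web-based recognizer; adequate for a sketch.
                transcript = recognizer.recognize_google(audio)
            except sr.UnknownValueError:
                continue  # utterance could not be understood
            # Keep only words a young child is likely to know.
            words = [w for w in transcript.lower().split() if w in CHILD_WORDS]
            display_for_child(words)

if __name__ == "__main__":
    tara_loop()
```

A real implementation would of course run on the tablet itself, render the words in large type rather than printing them, and choose words by the child’s age and context rather than a fixed set, but the basic recognize-filter-display cycle is the same.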

“Early Motor and Visual Capabilities
There is less need today, relative to just a few years ago, to instruct an audience about the sophisticated abilities of infants from their birth through their first years of life. Andrew Meltzoff of the University of Washington was the first to show that infants can imitate facial movements, and there are now many delightful variations of infants’ imitative behaviors on the Internet. The learning of baby signs is also a form of imitation learning, and Linda Acredolo and Susan Goodwyn at the University of California, Davis, systematically documented the successful learning of baby signs in parallel with speech.

Well-documented research and measurement of infants’ vision development suggest that infants during the first year of life have the capacity to perceive written language. Some of the vision milestones for infants are the perception of color by one month, focusing ability at two months, eye coordination and tracking at three months, depth perception at four months, and object and face recognition at five months. Infants’ visual acuity also improves dramatically from birth onward, reaching close to adult levels by eight months of age. It appears that infants do have the motor and visual capabilities to acquire a visual language, and it is possible that they could acquire literacy naturally.”

“Infants clearly have the capacity to perceive, process and learn semantic components in spoken language. Given the argument for infants being equipped for learning to read spontaneously, why haven’t they done so? Spoken language is present in a child’s environment continuously from birth and it is learned inductively. My answer must be that written language is not present often enough or saliently enough in the growing child’s world to allow inductive learning. Written language should also be acquired if, like speech, it is presented often enough and is perceptible in socially meaningful contexts. No child has yet had this opportunity, but current technology might enable this learning as easily in written language as it is in spoken language.”