28 October 2016

Beyond Ender: Amplified Intelligence and the Age of the School Wars

Erik Richardson

Innovation. Black Swans. From Welsh longbowmen to the Enigma machines to the Stuxnet virus. The race to make the next giant leap first has always been critical, and in an era when networks and cybersystems can implement an advantage on a global scale within nanoseconds, the risk of being second is worse than ever. This is a first sketch of an initiative to help make sure we remain those precious few nanoseconds ahead of our opponents.

In the same way that the nuclear arms race depended on the control of plutonium and enrichment facilities, we must look to our supply chain. We must look to the research and development labs where our most powerful weapons are being programmed, tweaked, and tested. In short, we must look to the classrooms of our primary education system.

Given the mental focus and agility that will be required to successfully pilot centaur-like interface systems, to take one example, and to handle any number of black swan innovations we are not yet able even to foresee, what are the foundational skills and meta-skills teachers should be building, and how?

What follows is a brief sketch, in broad strokes, of an initiative to make sure we have, effectively, the most enriched plutonium. The first section lays out a brief case for why human-technology integration will be the fastest, best advantage. The second section offers some particularly promising examples that would serve as a starting point for improving the enrichment of our neural plutonium. The third section offers a suggested starting point for what we would need to do to leverage the potential outlined in the first two.

The Central Importance of Interface Capacity: Why Centaurs Will Always Beat Pure AI


We could easily spend a whole year of conferences arguing about whether AI systems will ever be able to completely imitate humans and, if so, how much time and how many piles upon piles of resources it would take to create and sustain such systems. In the time it takes even to decide whether computers could catch up to where we are, the best human minds should be sprinting ahead at full speed, expanding the gap that would have to be closed and leveraging every tool they can to jam that gap open. Let us consider a case for how much bigger and harder to close that gap already is. The more time we have on our side at the outset, the greater the advantage of the interface strategy.

We Can Be More Irrational Than AIs Can

It is a fairly common element of discussions about the capabilities of humans versus computers that we have a capacity to engage in self-referential evaluation of the rule sets governing our various decisions and patterns of behavior. Often the discussion then goes into some detail examining the extent to which computers are just beginning to imitate that behavior and how far the imitation will proceed. For the sake of the current argument, however, let us bracket that aspect for now. In fact, let us even allow that within x number of years, computers might be capable of replacing any of the given rules that make up part of their strategic decision process with another rule developed over the course of their learning and feedback cycles. For the time being, we still hold an edge here. While machines can beat us at narrow and rigid activities like chess or Jeopardy, they are confounded by contests in which things like bluffing and deceit (StarCraft[i]) come into play. The same is still currently true for more creative endeavors. Whereas we can easily imagine being beaten in the shot put or the 100-meter dash, endeavors like figure skating, where the contestants seek to surprise the judges with creativity, still tilt in our favor.[ii]

However, the one aspect of this behavior worth holding onto is our ability to engage in additional types of rule-altering behavior, such as a merely temporary suspension of a rule or subset of rules, and our ability to continue operating under contradictory decision rules for a given amount of time. While economic models still rest on assumptions like rational buyers and sellers acting on perfect (or near-perfect) information, real stock markets are riddled with any number of irrational behaviors like fear, over-confidence, and greed. As a result, even the best algorithms are unable to outperform talented traders and crowd-sourced predictions.[iii]

While there is not adequate room to develop the following case in full detail here, let us at least set out the basic framework, as it plays a potentially explanatory role and will suffice as a placeholder in a subsequent argument. Part of the nature of much of our seemingly irrational behavior has to do with the ability to move, limit, or suspend various of our decision rules in relation to other, higher-order rules. A couple of simple examples should illustrate the idea. In one case we might violate the operating principles by which we manage our everyday behavior for the sake of social propriety. In another case we might re-calibrate our valuation of a friendship as a result of a political disagreement. In such situations we are not merely vacillating between "treat Sarah as a friend" and "do not treat her as a friend"; rather, we are overriding a lower-order rule for the sake of a higher-order rule. If we can agree that religion is often held to be one of the highest-order rule sets under which we operate, and we call to mind any of the numerous examples where people engage in highly irrational behavior in accordance with their religious beliefs, we have the basic idea. The upshot is that until machines are capable of moving up the hierarchy of rule sets as far and as fully as we can, their ability to out-predict our behavior will remain limited.
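To make the idea of hierarchically ordered rules a bit more concrete, here is a minimal sketch in Python. The rule names, priorities, and situations are hypothetical, invented purely for illustration; this is a toy model of the structure being described, not a claim about how any actual mind or AI system works.

```python
# Toy model of hierarchically ordered decision rules, where a higher-order rule
# can override or temporarily suspend a lower-order one. All rule names,
# priorities, and situations are hypothetical, invented purely for illustration.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Rule:
    name: str
    priority: int                    # higher number = higher-order rule
    applies: Callable[[Dict], bool]  # does the rule speak to this situation?
    decide: Callable[[Dict], str]    # what it recommends when it applies
    suspended: bool = False          # a higher-order consideration may switch it off

def decide(rules, situation):
    """Let the highest-priority rule that applies (and is not suspended) win."""
    live = [r for r in rules if not r.suspended and r.applies(situation)]
    if not live:
        return "no rule applies"
    return max(live, key=lambda r: r.priority).decide(situation)

# Lower-order rule: treat Sarah as a friend.
friendship = Rule("treat_as_friend", 1,
                  applies=lambda s: s.get("person") == "Sarah",
                  decide=lambda s: "act warmly")

# Higher-order rule: social propriety at a formal event overrides casual warmth.
propriety = Rule("social_propriety", 2,
                 applies=lambda s: s.get("setting") == "formal_event",
                 decide=lambda s: "act with formal reserve")

rules = [friendship, propriety]
print(decide(rules, {"person": "Sarah", "setting": "formal_event"}))  # act with formal reserve
print(decide(rules, {"person": "Sarah", "setting": "coffee_shop"}))   # act warmly

# Temporary suspension: a still-higher-order consideration can switch a rule off
# for a time without deleting it, then restore it later.
propriety.suspended = True
print(decide(rules, {"person": "Sarah", "setting": "formal_event"}))  # act warmly
propriety.suspended = False
```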

We Can Make Leaps That AIs Can't – Usually by Analogy

Ken Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering, and a team of collaborators are developing a model that could allow computers to reason more like humans. Their model, called the structure-mapping engine (SME), is capable of analogical problem solving: it solves a problem, such as a physics problem, by looking for other solved problems that seem sufficiently similar to the one at hand.[iv] While the article goes on to describe how well the new model imitates human problem-solving in narrow and rigidly defined spaces like a physics textbook, and posits an ability to do so in the case of moral problem-solving as well, we should be careful not to be pulled along by the optimism of the researchers. This holds true even if we were to agree with Forbus et al. that this is the central concern for AI that can fully mimic human thinking.
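As a rough illustration of the retrieval step described above, consider the toy sketch below. It is emphatically not the SME itself, which aligns structured relational representations in a far more principled way; the problem descriptions and the simple overlap measure here are assumptions made for illustration only.

```python
# Crude stand-in for the retrieval step described above: given a new problem,
# find the previously solved problem whose relational description overlaps most.
# This is NOT the structure-mapping engine; SME aligns structured representations
# in a far more principled way. The problems and the overlap measure are
# illustrative assumptions only.

def relational_overlap(a: set, b: set) -> float:
    """Jaccard similarity over sets of (relation, arg1, arg2) triples."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# A tiny hypothetical library of solved problems, each described as relation
# triples plus the method that solved it.
solved = {
    "block on incline": ({("force", "gravity", "block"),
                          ("resists", "friction", "motion"),
                          ("constrains", "surface", "block")},
                         "resolve forces along the incline"),
    "charging capacitor": ({("flows", "current", "circuit"),
                            ("opposes", "capacitor", "change"),
                            ("approaches", "state", "exponentially")},
                           "solve a first-order differential equation"),
}

# New problem: an object falling with air resistance.
new_problem = {("force", "gravity", "object"),
               ("resists", "drag", "motion"),
               ("approaches", "state", "exponentially")}

best_name, (_, best_method) = max(
    solved.items(),
    key=lambda item: relational_overlap(new_problem, item[1][0]),
)
print(f"Most analogous solved problem: {best_name} -> {best_method}")
# Here the shared "approaches a state exponentially" relation points to the
# capacitor, the classic cross-domain analog for terminal-velocity problems.
```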

One of the areas where humans still seem to have a great capacity to outperform computer models is inventiveness and innovation of a profound nature. This is the difference between, say, generating simple variations on a given molecular structure to create new materials and the kinds of innovation reflected in black swan events, a well-established feature of warfare. As one author characterized it, the spark of a significant analogy involves mapping elements from an object or situation in one domain onto another in a rather distant domain.[v] Because the distance between all possible domains creates such a large sample space, the possible comparisons cannot be mapped out by a systematic algorithm that completes in feasible timeframes. Indeed, the sheer number of objects in our experience, and the number of sets into which each could be sorted and compared based on some subset of its features, is virtually limitless.[vi] This is before we even begin to compound the problem by asking the algorithm to evaluate the significance of each of the possible comparisons. That being said, even if we were to allow that computers could begin to do this at the level of, say, a middle-grade elementary student (or higher), the problem is compounded when we combine it with a feature of collaborative complexity that still keeps humans well in front of reasonably foreseeable AI systems.
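To put rough numbers on that claim, the back-of-the-envelope arithmetic below shows how quickly exhaustive comparison becomes infeasible, even before any judgment of significance enters the picture. The feature count of forty is an illustrative assumption, not a figure drawn from the cited sources.

```python
# Back-of-the-envelope arithmetic for the combinatorial claim above. The feature
# count is an illustrative assumption, not a figure from the cited sources.

n_features = 40                                  # features per object (modest)
subsets_per_object = 2 ** n_features             # ways to group one object's features
print(f"feature subsets per object: {subsets_per_object:,}")   # ~1.1 trillion

# Pairing every feature subset of one object with every subset of another gives
# 2^n * 2^n candidate comparisons -- before judging the significance of any of them.
candidate_comparisons = subsets_per_object ** 2
print(f"subset-to-subset comparisons for one pair of objects: {candidate_comparisons:.2e}")

# Even at a billion comparisons per second, exhaustive search is out of reach:
seconds = candidate_comparisons / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"years at 10^9 comparisons per second: {years:.2e}")     # ~3.8e7 years
```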

The One and The Many

Now we come to an interesting compounding of the problem as set out thus far. It is well to remind ourselves at this point that the goal at hand is to spark lively thought and discussion, not to advance any particular vested research interest or project of the author.

One of the things that humans can do that computers cannot is network and collaborate with dissimilar programs and operating systems. We are capable not only of running incompatible rules within a given timeframe but, as we grow up, of running simulations of whole other people. "What would Jesus do?" "Dad would have loved this." And so on. This is foundational to our ability to reconcile and collaborate among team members of diverse backgrounds and areas of expertise—each with a vast capacity to generate inventive algorithms in relation to any of the shared problems the team confronts. For lack of a better analogy, a group is like an Enigma machine with other Enigma machines feeding into it, each team member representing a different set of dials that feed into one of the main machine's dials. On any given problem on any given day, the order in which the team might rank the inputs of the different members, and which of them might make inventive leaps of analogy based on someone else's input and be the recipient of such leaps in turn, creates an incomputable value space exponentially beyond that of a single inventive decision-maker, as considered above. Because a computer cannot adequately imitate one of us, it also cannot reach the still-further step of adequately imitating a whole team of us.

This dimension of the problem deserves to be unfolded in more careful detail and will be taken up in subsequent projects. To tie it back to the problem at hand, it is enough to note that the extent to which you agree or disagree with this line of thinking is merely a matter of how far ahead of computers you think we are at present and whether that gap can ever be closed. The conclusion at hand is this: given the distance yet to travel before a computer could even be said to have "religious" or "political" beliefs, the cost of getting one that far along, and the risk of designing a machine with sufficient rule-questioning capacity to achieve human-scale freedom to go AWOL, to disobey orders, or to defect to the enemy, we could assuredly make better use of that same vast quantity of money, time, and resources by focusing on technology that amplifies and augments our already superior capability. That is to say, why spend the resources and time catching up to where we already are and then pushing farther ahead, instead of launching ahead from here?

Augmenting the Interface Capacity of the Human Mind

With this brief look at the current level of AI attempts to imitate humans in mind, let us turn to examine some of the research that provides promise of pushing those same human minds even further ahead.

Meditation

One of the most promising areas, both on its own and with regard to its potential to amplify any of the other methodologies being explored, is meditation. This ancient set of techniques has been shown to actually change the structure and wiring of the brain, improving things like attention span and sensory processing.[vii] By strengthening areas like the prefrontal cortex, practitioners gain the ability to modulate emotional responses, reduce stress and anxiety responses to pain and negative stimuli, and remain more objective.[viii] What is more, regular, extended meditation practice has been found to increase gray matter volume in the areas of the brain that govern working memory and decision making. Of particular interest is its ability to prevent the normal decline in these areas that comes with age.[ix]

Running

As an interesting counterbalance to sitting still, much current research is also being done on the lasting brain benefits of sustained aerobic running. A strong and growing body of science shows that running increases the growth of new neurons in the hippocampus; one Harvard professor of psychiatry has referred to it as "Miracle-Gro for your brain." Like meditation, it also reduces stress and anxiety.[x] This growth in the hippocampal region builds the capacity for learning and remembering new material, amplifying the rate and volume of knowledge a given runner can bring to bear on different topics.

Modifications to Diet

In addition to physical training, another promising and emerging area of research has to do with fundamental changes not just to the content of our diets, but to the pattern and volume of our eating itself.

Calorie Reduction

In addition to various positive effects on longevity and disease resistance, calorie reduction has been shown to have a significant impact on neural plasticity and neurogenesis, prompting the production of new brain cells as well as enhancing the adaptability of the brain.[xi]

Intermittent Fasting

Beyond the benefits of overall calorie reduction, there also seem to be interesting impacts from intermittent fasting—regularly skipping eating for a whole day. Among the benefits shown are improved cognitive function, increased stress resistance, and enhanced neurotrophic factors.[xii] Other research has shown resistance to disease and damage as well as improved regeneration of the immune system.[xiii]

Taurine Supplementation

Another potential change to our dietary practices involves making sure we get plenty of taurine, an amino acid. It has been shown in recent studies to have biochemical properties that promote new brain cell formation. The research has shown that taurine triggers growth of new brain cells in the hippocampus, the area of the brain most concerned with memory.[xiv] It also appears to increase the electrical activity (signaling ability) in nerve cells.[xv]

Other Dietary Changes

There has also been research support for the impact on the brain and learning abilities of dietary changes such as cutting sugar and alcohol, cutting down on certain kinds of fats, and increasing intake of resveratrol.

Chess

There have been a number of studies on the cognitive benefits—both analytical and creative—of chess instruction. This bears more research and development, but a recent meta-study on the subject suggests some significant impact. That study dealt specifically with transfer to other academic areas; the effect could be expected to be somewhat higher when mapped specifically to strategic conflict exercises and scenarios. (Giovanni Sala and Fernand Gobet, "Do the benefits of chess instruction transfer to academic and cognitive skills? A meta-analysis," Educational Research Review, Volume 18, May 2016, Pages 46–57.)

It is worth noting here that there are also a number of alleged ways to boost intelligence that have fallen short under more rigorous studies or attempts to replicate—such as n-back memory games.[xvi] Part of the ongoing research and development would involve testing additional possible ways and means of amplifying the growth of new brain cells and of amplifying the learning abilities that draw on those new cell growth rates.

Synergy Effects and Amplification

These and other research findings show interesting promise for developing increased capabilities in military personnel—both with respect to raw processing and analysis and with respect to the acceleration of integrated technology, neural implants, computer-interface systems, and so on. This only scratches the surface, however, as studies are not yet being done to explore how combinations of these approaches might enhance the effects. We need not go on at great length laying out every one of the possible combinations and synergies, but let us point to a couple. For one, how could the pain management effects of enhanced meditation techniques allow for a more intense running regimen so as to reap the maximum increase in cell growth? For another, how much more benefit will a trainee get from chess if they have amped up learning cycles by running before each day's chess training session? Research shows that this is one of the particular effects of running—an increase in learning ability afterward. What about the benefits to chess if the person meditates and runs on a fasting day? How much faster does the learning and brain-growth cycle accelerate?
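To get a rough sense of the design space these questions open up, the sketch below simply enumerates the combinations of the interventions discussed in this section. Treating each as a simple on/off factor, and the particular list of six, are simplifying assumptions for illustration, not a proposed study protocol.

```python
# Illustrative enumeration of the combination space opened up by the questions
# above. The list of interventions comes from this section; treating each as a
# simple on/off factor is a simplifying assumption, not a proposed study design.

from itertools import combinations

interventions = ["meditation", "running", "calorie reduction",
                 "intermittent fasting", "taurine", "chess training"]

total = 0
for k in range(1, len(interventions) + 1):
    combos = list(combinations(interventions, k))
    total += len(combos)
    print(f"{k}-way combinations: {len(combos)}")

print(f"total non-empty combinations to study: {total}")  # 2**6 - 1 = 63

# Layer in ordering and timing questions (run *before* chess, meditate on fasting
# days, and so on) and the space grows much faster still -- one reason synergy
# studies lag behind single-intervention studies.
```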

Implications for the Future

Some of the potential implications of focused development and implementation of these research findings, along with further exploration down these and related paths, for a more intelligent military force should be clear; the feedback cycle to amplify them will be addressed momentarily. In addition, however, this feedback cycle will also include the implications and applications that bear directly on a high-tech program of research and development aimed at the technological enhancement of personnel.

As but one example of the concrete applications of such brain training, we can point to the gradual loss of reflection and emotional connection that emerges as the speed and complexity of the visual information being processed increase. An example of this is seen in areas as generic as our rising use of the internet (Carr, The Shallows: How the Internet Is Changing the Way We Think, Read, and Remember, 2011, p. 133), but similar transformations of our neural processing patterns and routines need to be offset, as suggested in discussions of the similar effect on drone pilots over time.[xvii] We also see very similar issues at play in the visual-spatial models for controlling certain neuroprosthetics.[xviii]

Stage I

In stage one, we would begin testing these neural enhancement programs during training for select personnel before rolling elements out to general training practices—like basic training.

Stage II

As a track record of successes begins to accumulate, we do two things. The first is to put some of the more successful candidates back into the system as researchers developing and implementing additional amplification methodologies for future trainees. The second is to move other candidates into a program that adds technological interfaces and implants to further amplify abilities, while also providing data for developing specific methodologies geared toward increasing the capacity to accommodate such technological interventions.

Stage III

If we are going to get serious about unfolding the potential these methodologies present, we have to begin expanding the program downward in age to the middle-school level, where the developing adolescent brain begins a flurry of growth and pruning that extends into the early twenties. This period is vital for establishing pathways and patterns of processing for which there is no later substitute.

A digestible proposal in this regard would be to roll out the programs at military-style academies first, and establish data before proposing any implementations in other types of schools (private, public, parochial).

As suggested at the outset, we are entering an era where we need not burn hours a day for years loading content into the heads of our children that can be downloaded at the touch of a button. We need to shift to developing the capacity to process those volumes of information and to operate in complex, tech-mediated landscapes. Like the Welsh longbowmen, or a major-league pitcher, the peculiar focus of development needed to reach our potential must be cultivated from early on.

Stage IV

Here, as with the training of explicitly military personnel, the important transition will be to turn the successful students coming out of these programs into teachers and researchers feeding back into the educational cycle.

Stage V

The last stage would be to add non-military technological interventions into the educational stream. Here, again, we see an analogous cycle to that suggested in the case of military training, where we now have tech-augmented teachers working in classrooms, and the students are using tech interfaces to amplify their learning and processing abilities.

Conclusion

This is the merest sketch of a direction for research and development, but the hope is that it has at least provided a footing upon which to launch meaningful discussion about how we might take the next step of pulling together a full case study for consideration. That, in turn, could serve as the blueprint for a proof-of-concept scenario from which to work.

Loads of useless junk are being taught in schools. Things with such a small chance of ever being needed, and so easily compensated for by reading about them for five minutes at the unlikely point in the future when you would need them, could and should be replaced with the material suggested here: daily running, chess, meditation, and so on. Additionally, we could be teaching children, training their minds, on things like how to figure out a strategy for a game you have never played before, or the different kinds of military engagements. Let those take the place of memorizing which countries were on which side of the Hundred Years' War or which kinds of slugs use which kinds of mating organs (not kidding). Even when we do get kids into a physical education class, do you know how many hours of U.S. physical education class time are still burned every year on stuff like square dancing and bowling?

Make no mistake: the coming age of warfare will be fought in and with minds, and that means the classrooms of our education system will be the plutonium mines of the next fifty years.

End Notes





[v] P. N. Johnson-Laird, "A Taxonomy of Thinking," in The Psychology of Human Thought, ed. Robert J. Sternberg and Edward E. Smith (Cambridge University Press, 1988).

[vi] Nicholas Rescher, Studies in Cognitive Finitude (Transaction Books, Rutgers, NJ, 2006), p. 120.




[x] Timothy J. Schoenfeld, Pedro Rada, Pedro R. Pieruzzini, Brian Hsueh, and Elizabeth Gould, "Physical Exercise Prevents Stress-Induced Activation of Granule Neurons and Enhances Local Inhibitory Mechanisms in the Dentate Gyrus," The Journal of Neuroscience, 1 May 2013, 33(18): 7770–7777; doi: 10.1523/JNEUROSCI.5352-12.2013.

[xi] Tytus Murphy, Gisele Pereira Dias, and Sandrine Thuret, "Effects of Diet on Brain Plasticity in Animal and Human Studies: Mind the Gap," Institute of Psychiatry, King's College London; received 14 January 2014, accepted 17 March 2014, published 12 May 2014.


[xiii] Adam P.W. Johnston, Scott A. Yuzwa, Matthew J. Carr, Neemat Mahmud, Mekayla A. Storer, Matthew P. Krause, Karen Jones, Smitha Paul, David R. Kaplan, and Freda D. Miller, "Dedifferentiated Schwann Cell Precursors Secreting Paracrine Factors Are Required for Regeneration of the Mammalian Digit Tip," Cell Stem Cell, Vol. 18, Iss. 6, June 2, 2016.

[xiv] Gebara E, Udry F, Sultan S, Toni N. Taurine increases hippocampal neurogenesis in aging mice. Stem Cell Res. 2015 May;14(3):369-79.

[xv] Wang Q, Zhu GH, Xie DH, Wu WJ, Hu P. Taurine enhances excitability of mouse cochlear neural stem cells by selectively promoting differentiation of glutamatergic neurons over GABAergic neurons. Neurochem Res. 2015 May;40(5):924-31

[xvi] Abby Olena, "Does Brain Training Work? Experts are skeptical about the effectiveness of games that claim to improve cognitive function," The Scientist, April 21, 2014.

[xvii] Targeting: The Challenges of Modern Warfare, ed. Paul A.L. Ducheine, Michael N. Schmitt, and Frans P.B. Osinga, p. 22.

[xviii] Elaine Astrand, Claire Wardak, and Suliann Ben Hamed, "Selective visual attention to drive cognitive brain–machine interfaces: from concepts to neurofeedback and rehabilitation applications," CNRS, Cognitive Neuroscience Center, UMR 5229, University of Lyon 1, Bron Cedex, France.

Erik Richardson is an adjunct college instructor for business, psychology, and political science. With years of teaching experience at grade levels from kindergarten to college, he is currently a grad student in psychology. He holds a BA from Truman State University, an MA from University of Missouri - Columbia, an MBA from Marquette University, and has done additional graduate work in education. He is president of Richardson Ideaworks, which provides e-learning and business consulting services, and he is currently engaged in freelance work with FMSO, Ft. Leavenworth, and the MMOWGLI group at the MOVES Institute (Naval Postgraduate School).
