Purposes – “Learning to survive in harmony”
The Brain “should facilitate social networking between organisations, groups and individuals. The resulting global social network would itself be part of its ‘nervous system’.” H.G. Wells – Critical Introduction, p. 27
The global social network Wells envisions serves more as a “clearing house of misunderstandings” than a place to post one’s opinions. Interacting with the network reveals a “common conception of a common purpose.”
The CWM quotes Wells, who predicts that “knowledge workers will move from the assembly of knowledge to its digestion, with the ultimate objective of achieving wisdom.”
According to Wells, wisdom is “a sense of knowing what to do, when handling complex problems that require understanding and effective decisions.”
H.G. Wells himself admitted that the World Brain concept originated in the wisdom of a 17th-century polymath, John Amos Comenius, author of Patterns of Universal Knowledge (4).
Comenius cried out for an “immediate remedy” to the “madness of mutual destruction” and sought to cultivate “an internal peace of minds inspired by a system of ideas and thinking”. H.G. Wells further expands upon the system, which involves us in “knowledge working” and “interactive processing”. The purpose of his vision is for humanity to collectively form a trusted “intellectual authority”.
In this system, the public collaborates with universities and research institutions throughout the world. Working together, participants would learn to distinguish “bed rock fact” from “visions, projects, and theories”.
Pierre Teilhard de Chardin’s idea of a “noosphere” is noted as a more spiritual dimension of the World Mind concept. The noosphere is a network of ideas that brings the evolution of the natural world into union with God at the Omega Point. In his theory, Teilhard de Chardin hypothesizes that as structural complexity increases, so does consciousness (CWM).
Peter Russell and Hans Swegen both view the world mind as an “inevitable pattern of evolution”, but James Lovelock, like H.G. Wells, sees it as an instrumental matter of social construction and education. Lovelock puts forth a “project needing human attention, major policy decisions and some form of money to realize”, arguing that we should focus on creating “a guide-book for our survivors to help them rebuild civilization without repeating too many of our mistakes” (CWM).
I asked Jan which of civilization’s mistakes “our survivors” would learn the most from. The nuclear crisis in Fukushima came first to his mind. Jan suggests the World Mind would be concerned with “the legacy of nuclear waste that we’re leaving behind.” Its purpose would be to make sure that the next generation understands how to keep the nuclear waste cooled and safely disposed of.
An Open Intelligence collaborator from Arkansas, Elle D’Coda, bemoans the inadequate response born of “global government apathy”. She questions why every nuclear scientist on the planet was not immediately called into action at Fukushima.
Futurist, author, and researcher Ben Goertzel argues that the nuclear crisis in Japan could have been avoided had we invested more attention in developing artificial intelligence. He suggests that a computer simulation guided by an “AI-powered ‘artificial nuclear scientist’” would have foreseen how to remedy the disaster, as well as any other potential catastrophe. With artificial intelligence, a greater-than-human mind could serve the purpose of protecting humanity from a wide variety of disasters we’ve yet to fully comprehend as a species.
Artificial intelligence is often associated with memes surrounding the concept of the “singularity”, which Research Fellow Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence says carries “unsavory connotations”. On his site he turns our attention to the idea of an “intelligence explosion”, coined by the statistician I.J. Good, which refers to how technology closes a loop that improves minds in a positive feedback cycle. Through this cycle a mind would learn to repeatedly reprogram itself until it finally upgraded into a superintelligence. This greater-than-human intelligence presents existential risks to humanity that Yudkowsky argues are “the hardest to discuss”, making a superintelligence “more worrisome, not less” (2).
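Good’s positive feedback cycle can be caricatured in a few lines of Python. This is only a toy model of my own, not anything from Good or Yudkowsky: it contrasts a tool that improves a mind by a fixed amount each cycle with a mind whose improvement scales with its own current capability, which is the feedback loop in question.

```python
def fixed_improver(capability: float) -> float:
    """A tool that adds a constant amount of capability per cycle."""
    return capability + 1.0

def self_improver(capability: float) -> float:
    """A mind whose improvement scales with its own capability:
    better minds make bigger upgrades to themselves (the feedback loop)."""
    return capability + 0.5 * capability

c_fixed = c_self = 1.0
for _ in range(10):
    c_fixed = fixed_improver(c_fixed)
    c_self = self_improver(c_self)

# Constant improvement grows linearly; self-improvement compounds.
# After 10 cycles: c_fixed == 11.0, while c_self == 1.5 ** 10 (about 57.7).
print(c_fixed, c_self)
```

The arbitrary 0.5 gain and the ten-cycle horizon are placeholders; the point is only the shape of the curves, linear versus compounding, which is why the “explosion” framing attaches to the second case.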