*Image:"Better Than Us," 2018 Netflix Series
|
"Yet, it's our emotions and imperfections that makes us human." -Clyde DeSouza, Memories With Maya
IMMORTALITY or OBLIVION? I hope everyone would agree that creating Artificial General Intelligence (AGI) leaves us with only two possible outcomes: immortality or oblivion. The importance of a beneficial outcome of the upcoming intelligence explosion cannot be overstated.
Any AGI at or above human-level intelligence can be considered as such, I’d argue, only if she has a wide variety of emotions, the ability to achieve complex goals, and motivation as an integral part of her programming and personal evolution. I can identify the following three most promising ways to create friendly AI (benevolent AGI) in order to safely navigate the uncharted waters of the forthcoming intelligence explosion:
(I). Naturalization Protocol for AGIs (bottom-up approach);
(II). Pre-packaged, upgradable cognitive modules and ethical subroutines with historical precedents, accessible via the Global Brain architecture (top-down approach);
(III). Interlinking with AGIs to form the globally distributed Syntellect, a collective superintelligence (horizontal integration approach).
Now let’s examine in
detail the three ways to facilitate the emergence of benevolent Synthetic Superintelligence and, for those in AI research, ride the “crest of the wave” of the upcoming intelligence explosion:
I. (AGI)NP: Program AGIs with emotional intelligence and empathic sentience in a controlled, virtual environment via human life simulation (an advanced first-person storytelling version, widely discussed in AI research).
In this essay I will elaborate on this specific method, while only briefly touching on the other two, since the Global Brain is discussed in great detail in my recent book The Syntellect Hypothesis: Five Paradigms of the Mind's Evolution.
II. COGNITIVE MODULES AND ETHICAL SUBROUTINES: This top-down approach to programming meta-cognition and machine morality combines conventional, decision-tree programming methods with Kantian, deontological or rule-based ethical frameworks and consequentialist or utilitarian, greatest-good-for-the-greatest-number frameworks. Simply put, one writes an ethical rule set into the machine code and adds an ethical subroutine for carrying out cost-benefit calculations. Designing the ethics and value scaffolding for AGI cognitive architectures remains a challenge for the next few years. Ultimately, AGIs should act in a civilized manner, do “what’s morally right,” and serve the best interests of society as a whole. AI visionary Eliezer Yudkowsky has developed the Coherent Extrapolated Volition (CEV) model, which represents the choices and actions we would collectively take if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”
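To make this top-down recipe a little more concrete, here is a minimal sketch in Python of the rule-set-plus-cost-benefit idea described above. It is only an illustration under stated assumptions, not an actual AGI design: the Action attributes, the rule set, and the utility function are all invented for the example.

```python
# Hybrid top-down ethics sketch: deontological rules veto forbidden actions,
# then a utilitarian subroutine ranks the permissible ones by aggregate benefit.
# Every name here (Action, RULES, expected_utility) is purely illustrative.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humans: bool
    deceives: bool
    benefit_per_person: float
    people_affected: int

# Kantian-style hard constraints: any rule returning True vetoes the action.
RULES = [
    lambda a: a.harms_humans,
    lambda a: a.deceives,
]

def expected_utility(a: Action) -> float:
    """Consequentialist score: greatest good for the greatest number."""
    return a.benefit_per_person * a.people_affected

def choose(actions: list[Action]) -> Action | None:
    """Filter out rule-violating actions, then pick the highest-utility one."""
    permissible = [a for a in actions if not any(rule(a) for rule in RULES)]
    return max(permissible, key=expected_utility, default=None)

if __name__ == "__main__":
    options = [
        Action("divert_resources", False, False, 0.2, 10_000),
        Action("coerce_minority", True, False, 0.5, 50_000),
    ]
    best = choose(options)
    print(best.name if best else "no permissible action")
```

The design choice being illustrated is simply the layering: rule-based constraints act as a hard filter, and the cost-benefit subroutine only ever ranks what survives that filter.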
Will our brain’s neural code be a pathway to AI minds? In May 2016, I stumbled upon a highly controversial Aeon article titled “The Empty Brain: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer” by psychologist Robert Epstein. The article attested once again to just how wide the range of professional opinions can be when it comes to the brain and the mind in general. Unsurprisingly, it drew outrage from readers. I disagree with the author on most fronts, but on one point I do agree with him: our brains are indeed not “digital computers.” They are, rather, neural networks in which each neuron might function somewhat like a quantum computer. The author never offered his own account of what human brains are like; he only criticized IT metaphors. It is my impression that, at the time of writing, the psychologist had not even come across such terms as neuromorphic computing, quantum computing, cognitive computing, deep learning, computational neuroscience, and the like. All these IT concepts clearly indicate that today’s AI research and computer science derive their inspiration from how the human brain processes information, notably neuromorphic neural networks that aspire to incorporate quantum computing. Deep neural networks learn by doing, just like children.
There’s nothing
wrong with thinking that the brain is like a computer, but in many ways the
brain is a lot different. Whereas information on a computer hard drive is laid
out and ordered, for instance, that doesn’t seem to be the case with human
memories. Arguably, they are stored holographically throughout brain regions
(as well as in the field of non-local consciousness). There are similarities
and differences. Like a computer, the brain processes information by shuffling
electrical signals around complex circuitry. Neither analog nor digital, the
brain works using a signal processing format that has some properties in common
with both.
The computational hypothesis of brain function
avers that all mental states – such
as your conscious experience of reading this sentence right now – are computational states.
These are fully characterized by their functional relationships to relevant
sensory inputs, behavioral outputs, and other computational states in between.
That is to say, brains are elaborate input-output devices that compute and process
symbolic representations of the world. Brains are computers, with our minds
being the software, simplistically speaking, of course. The physical wetware
isn’t the stuff that matters. What matters is the algorithms that are running
on top of the wetware. Theoretically, we should be able to “crack our neural
code” and reproduce it on other computational substrates. The central goal of
neuroscience is breaking this neural code –
deciphering the relationships between spatiotemporal patterns of activity across
groups of neurons and the behavior of an animal or the mental state of a
person. However, algorithmic solutions in this top-down approach in AI research
will most likely come not from neuroscience and not even from computational
neuroscience; they might come as breakthroughs from neurophilosophy, software engineering, and computer science.
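As a toy illustration of the functionalist claim above, here is a tiny Python sketch in which a “mental state” is nothing over and above its role in mapping sensory inputs to behavioral outputs and successor states. The states, stimuli, and transition table are invented purely for the example.

```python
# Functionalist toy model: a state is fully characterized by its place in a
# transition table of (state, input) -> (output, next state). The substrate
# (Python dict here) is irrelevant; only the functional relationships matter.

# Illustrative transition table: (state, sensory_input) -> (behavioral_output, next_state)
TRANSITIONS = {
    ("resting", "loud_noise"): ("orient", "alert"),
    ("alert",   "loud_noise"): ("flee",   "alert"),
    ("alert",   "silence"):    ("relax",  "resting"),
    ("resting", "silence"):    ("idle",   "resting"),
}

def step(state: str, sensory_input: str) -> tuple[str, str]:
    """Return (behavioral_output, next_state) for the given input."""
    return TRANSITIONS[(state, sensory_input)]

state = "resting"
for stimulus in ["loud_noise", "loud_noise", "silence"]:
    output, state = step(state, stimulus)
    print(stimulus, "->", output, "| now in state:", state)
```

Run on any substrate that preserves this table, the “mind” of the toy agent is, on the functionalist reading, exactly the same.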
As I mentioned above, the brain is a quantum neural
network. Quantum computation inside our brain is beyond classical physics; it lies in the realm of quantum mechanics and information theory. Our brain is not a “stand-alone” information-processing organ: it functions as the central unit of our integral nervous system, in recurrent feedback with the entire organism and the cosmos at large. Along the lines of the quantum cognition theoretical
framework, we can conjecture that quantum coherence underlies the parallel
information processing that goes on in the brain’s neocortex, responsible for
higher-order thinking, and allows our minds to almost instantaneously deal with
the massive amounts of information coming in through our senses. Quantum
computing is a natural fit for that: It is massively parallel information
processing that is ultrafast and practically inexhaustible.
A strict reductionist approach that takes a
bottom-up methodology to the mind seems to be missing some crucial element.
This kind of approach, focused on local, cause-and-effect classical mechanics within the brain, on neurons firing across their synaptic connections, is
doomed to fail. The mind is scientifically elusive because it has layers upon
layers of non-material emergence: It’s just like a TV screen – if you’re watching a movie and could only
look at an individual pixel, you would never understand what’s going on. No
single neuron produces a thought or a behavior; anything the “mindware”
accomplishes is a vast collaborative effort between brain cells. When at work,
neurons talk rapidly to one another, forming networks as they communicate, with
several networked links resonating at different times and with different
subgroups of nodes, such that understanding the behavior of individual “pixels”
or even of smaller groups of them won't tell the whole story of what's
happening.
We need to think in terms of networks, modules,
algorithms, and second-order emergence: meta-algorithms, or groups
of modules. We need these methods to see the whole screen, the bigger picture,
to see what’s playing in our minds. Ultimately, a new cybernetic approach with
a top-down holistic methodology could be applied to explain the human mind and
other multi-scale minds in creation. Human minds, as diverse as they are, occupy only a very narrow
stratum of the total space of possible minds.
Numerous studies have found that the brain
organizes itself into functional networks that vary in their activity and in
their interactions over time. One such classification gives us three major
networks: the central executive network, which is responsible for attentional
focus; the salience network, which involves awareness; and the default mode network, an “idling” mode associated with inward-focused thinking and mind wandering.
The
subtleties of our psyches are being managed by smaller networks – specific
modules in our brains. A unique cognitive percept is the end result of the
processing of a module or group of modules in a layered architecture. The idea
that the brain is made up of many regions that perform specific tasks is known
as ‘modularity’. This concept of
modular organization suggests that specialized areas of the brain do different
things, with certain capacities coming online one at a time and, over time, being stitched together to give the illusion of a unitary conscious experience.
In actuality, each individual part of the brain is doing its respective job,
and each then passes information to the next level of network. This continues
until we become aware of the thought or function like sight or sound. There are
many layers in an onion, in a manner of speaking.
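Here is a minimal sketch of that layered, modular picture in Python. The modules and their hand-offs are purely illustrative placeholders, not a model of real cortical processing: each specialized “module” does its one job and passes its result up to the next layer until something like a percept pops out at the top.

```python
# Layered modularity sketch: each module transforms its input and hands the
# result to the next layer. All modules and numbers are illustrative only.

def edge_detector(raw_pixels):          # early visual module
    return {"edges": len(raw_pixels) // 2}

def shape_assembler(features):          # mid-level module
    return {"shapes": features["edges"] // 4}

def object_recognizer(shapes):          # higher-level module
    return "face" if shapes["shapes"] > 2 else "unknown object"

LAYERS = [edge_detector, shape_assembler, object_recognizer]

def perceive(signal):
    """Pass the signal up the layered hierarchy, one module at a time."""
    for module in LAYERS:
        signal = module(signal)
    return signal   # what "reaches awareness" in this toy picture

print(perceive(list(range(32))))  # -> "face"
```

The point of the sketch is only the architecture: no single module “sees a face,” yet the stacked hand-offs produce the unified result at the top.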
Overall, the brain may operate on an amazingly
simple mathematical logic. “Fire together, wire together” is perhaps
neuroscience’s most famous catchphrase. Learning activates select neurons and
synapses in the brain, which results in a strengthening of the connections
between pairs of neurons involved in storing a particular memory. With
reactivated circuits, you get retrieved memories. Corresponding algorithms for
intelligence could inform neuromorphic computing, teach artificial circuits to recognize
patterns, discover knowledge, and generate flexible behaviors. That would enable
the creation of artificial neural networks that are wired in a manner akin to
our own grey matter but embedded in a different substrate.
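In code, the Hebbian “fire together, wire together” rule mentioned above can be sketched in a few lines. The learning rate and the random firing patterns below are illustrative assumptions; the only point is that a synaptic weight grows when its pre- and post-synaptic neurons are active at the same time.

```python
# Minimal Hebbian learning sketch: w[i, j] is strengthened only on time steps
# when pre-synaptic neuron i and post-synaptic neuron j both fire.

import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 4, 3
weights = np.zeros((n_pre, n_post))
learning_rate = 0.1            # illustrative value

for _ in range(100):
    pre = (rng.random(n_pre) > 0.5).astype(float)     # pre-synaptic firing (0/1)
    post = (rng.random(n_post) > 0.5).astype(float)    # post-synaptic firing (0/1)
    # Hebbian update: outer product is nonzero only where both neurons fired.
    weights += learning_rate * np.outer(pre, post)

print(np.round(weights, 2))  # frequently co-active pairs end up with larger weights
```

Reactivating a strengthened circuit then amounts to retrieving the stored pattern, which is the intuition behind “with reactivated circuits, you get retrieved memories.”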
Once designed, pre-packaged, upgradable cognitive
modules, including ethical behavior subroutines, with access to the global
database of historical precedents, and later even “the entire human lifetime
experiences,” could be instantly available via the Global Brain architecture to
the newly created AGIs. This method for “initiating” AGIs by pre-loading cognitive
modules with ethical and value subroutines, and regularly self-updating
afterwards via the GB network, would provide an AGI with access to the global
database of current and historical ethical dilemmas and their solutions. In a
way, she would possess a better knowledge of human nature than most currently
living humans. I would discard most, if not all, dystopian scenarios in this
case, such as those described in the book “Superintelligence: Paths, Dangers, Strategies” by
Nick Bostrom and more recently in the book “Human Compatible: Artificial
Intelligence and The Problem of Control” by Stuart Russell. While being
increasingly interconnected by the Global Brain infrastructure, humans and AGIs
would greatly benefit from this new hybrid thinking relationship. The awakened
Global Mind would tackle many of today’s seemingly intractable challenges and
closely monitor us for our own sake, while significantly reducing existential
risks.
Ever since Newton, materialist science has been entrenched in our models, but a new trend is now emerging towards post-materialist, information-theoretic science, which tends to transform any area of research into computer science, i.e., information technology. Once a field becomes information technology (genomics, for example), it jumps onto its own exponential growth curve of further development. We now see it in computational
neuroscience, connectomics, and other related fields. Computer science gives us
a new code-theoretic, substrate-independent model for looking at our brains and
our neural code. Our brains, however, do not generate consciousness, since our minds are embedded in the larger consciousness network, a topic discussed in The Syntellect Hypothesis. We humans are, deep down, information technology running on genetic, neural, and societal codes. Self-transcendence from a
bio-human or cyberhuman into a higher-dimensional info-being might be closer
than you think.
... to continue to Part II: Interlinking & AGI(NP) ...
-Alex Vikoulov
*Image Credit: "Better Than Us," 2018 Netflix Series
**Original article first appeared on EcstadelicNET (ecstadelic.net) in the Top Stories section on March 8, 2016.