Friday, June 4, 2021

AGI: How to Ensure Benevolence in Synthetic Superintelligence

*Image:"Better Than Us," 2018 Netflix Series




"Yet, it's our emotions and imperfections that makes us human."   -Clyde DeSouza, Memories With Maya


IMMORTALITY or OBLIVION? I hope everyone can agree that there are only two possible outcomes for us once Artificial General Intelligence (AGI) is created: immortality or oblivion. The importance of ensuring a beneficial outcome of the coming intelligence explosion cannot be overstated.


Any AGI at or above human-level intelligence deserves that name, I'd argue, only if she has a wide range of emotions, the ability to achieve complex goals, and motivation as an integral part of her programming and personal evolution. I can identify the following three optimal ways to create friendly AI (benevolent AGI) and safely navigate the uncharted waters of the forthcoming intelligence explosion:
