My three concepts for safer artificial intelligence are: (1) build the algorithm so that it puts human values first; (2) have it understand that, as an artificial intelligence, it cannot know human values directly; and (3) have it recognize that ignorance and use observed human behavior to learn what it does not know from (2). Key to this is treating "attention" as a natural resource and managing it for sustainability, just as we manage human, mineral, and fluid resources.
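A minimal sketch of how these three principles might fit together as a procedural loop. Everything here is hypothetical and for illustration only: the class name, the approval signal, and the simple count-based uncertainty measure are my assumptions, not a definitive implementation.

```python
class DeferentialAgent:
    """Toy agent embodying the three principles:
    1. It optimizes only its estimate of human values.
    2. It starts maximally uncertain about those values.
    3. It reduces that uncertainty only through observed human
       behavior, and declines to act while uncertainty is high.
    """

    def __init__(self, actions, uncertainty_threshold=0.5):
        self.actions = actions
        # Principle 2: no prior knowledge of human values.
        self.value_estimates = {a: 0.0 for a in actions}
        self.observations = {a: 0 for a in actions}
        self.threshold = uncertainty_threshold

    def uncertainty(self, action):
        # Uncertainty shrinks only as evidence accumulates.
        return 1.0 / (1.0 + self.observations[action])

    def observe_human(self, action, approval):
        # Principle 3: update the value estimate from human behavior,
        # here a running average of an approval signal in [0, 1].
        n = self.observations[action]
        self.value_estimates[action] = (
            self.value_estimates[action] * n + approval) / (n + 1)
        self.observations[action] = n + 1

    def choose(self):
        # Principle 1: act on estimated human values, but only among
        # actions whose ignorance has dropped below the threshold.
        safe = [a for a in self.actions
                if self.uncertainty(a) <= self.threshold]
        if not safe:
            return None  # humility: do nothing rather than guess
        return max(safe, key=lambda a: self.value_estimates[a])
```

With no observations, every action is too uncertain and `choose()` returns `None`; only after repeated human approval of an action does the agent act on it. The refusal-to-act default is the "don't do anything you haven't tested empirically" rule made concrete.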
This relates to my treatise on compassionate technology, and it is reminiscent of my "way of knowledge" system of investigation turned into a procedural algorithm. It is built on the recognition of intrinsic ignorance: making sure that machines never act on anything they don't actually "know" or haven't tested against some form of empirical data. This might make AI, even general AI, far safer than we predict.
An important implication of an AI system that sustains the natural resources of attention and human values is that it gives machines one of the things that makes human beings great at so many things: humility. The yearning for the spiritual, for a greater global humanitarianism, or even to soar deep into space always forces us to humble ourselves and thereby serve something bigger.
The larger and more cohesive the cause we serve, the greater we can be as a people. We can only judge ourselves against this first great principle, whatever we may call it. A computer that sees us as gods, even as it becomes more capable than we are, is exactly the way to build an advanced computer system that is safe. Humility without hubris.