So I had an interesting discussion with a particular someone not too long ago. Don't remember how it started exactly, but it was basically like, why are you doing this research, what does it mean to you... etc.
One of the things that came out of this very quickly was the realization that I really do believe that limits on the power of algorithms represent limits on what we can know, what we can do and be as people.
So then there's the objection: isn't that depressing, if we are just computers? And my response, to my surprise, was along the lines of: it's way worse if we really do have souls that we can't peer into or explain or simulate. Because then there's no possibility for communication; the machines can't simulate each other, unless the souls are basically identical, in which case they can.
I ended up arguing that, as long as we are all generic machines, we know that our ideas can always be explained to the other robots and can live on forever. If we have souls, then this is fucked: if the souls are computationally distinct, it may be possible for one person's thought to have no analogue in another person's thought. Then communication is impossible, and all of the ideas this person produced that made them distinct must die, because they cannot be comprehended and passed on.
Which is totally the opposite of how people usually argue this issue. They say: to live on, there must be a soul, so I would rather believe in that. Really? You'd rather believe that your most treasured thoughts and ideas can never be passed on, and that instead you will die and take them with you from this world? And, I suppose, spend the afterlife still unable to communicate them to your peers, or to receive their treasured ideas?
Universality of the human mind is much more important than immortality.