This thread is for discussions or debates of elements in personal identity theories, yours or anyone else's. I'm thinking this might be a little different from a couple of the presently pinned topics, since I hope that deeper inferences drawn from what personal identity is can be discussed in addition to what personal identity is.
Considering six basic components to a problem -- three in the acquaintance of the model, which could be the representations of (1) the problem domain, (2) the evaluative function, and (3) the optimal-solution codomain, and three in the less formal acquaintance, which could be (4) the situation, (5) the manipulator function, and (6) the end-state range -- I find it acceptable to develop (3) with the minimum possible known-physics bias, with (1) being more grounded and (2) being, perhaps by far, the most grounded, though only to the extent that it stays within its permitted budget for the model.
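Loosely, and only as a sketch in notation of my own choosing (none of these names are standard), the two triples can be pictured as a model-side map and a world-side map:

$$e : D \to S \quad \text{(the evaluative function (2), from the problem domain (1) into the optimal-solution codomain (3))}$$
$$g : \Sigma \to E \quad \text{(the manipulator function (5), from the situation (4) into the end-state range (6))}$$

On this picture, grounding (2) most heavily amounts to constraining $e$ more tightly than either of the sets it connects, within whatever budget the model allows it.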
For my personal identity theory, the optimal solution is dynamic, developed along with the intersection of conceivability and possibility limits. One apparent limit in this intersection (always restricted by my ability to form and prove general qualities, let alone to express them) may be observed in the following statement: given an appropriate definition of omniscience, omniscience is impossible if it's possible for a distinct but not necessarily disjoint cognitive particle P1 in n-dimensional space to have a higher predictive capability than cognitive particle P2, a resident of an m-dimensional space where m < n, while being logically unable to communicate all its knowledge of P2 to P2. This is supposed to mean that a cognitive particle can never know whether or not it knows everything when it is, in the limit, numerically identical with its observable universe.
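One rough way to paraphrase that conditional formally (Pred, Tell, and K are placeholder predicates of my own choosing, and this is only one reading, not a proof):

$$\Diamond\,\exists P_1, P_2 \Big[\dim(P_1) = n > m = \dim(P_2) \;\wedge\; \mathrm{Pred}(P_1) > \mathrm{Pred}(P_2) \;\wedge\; \neg\Diamond\,\mathrm{Tell}\big(P_1, P_2, K_{P_1}(P_2)\big)\Big] \;\Rightarrow\; \neg\Diamond\,\mathrm{Omniscient}(P_2)$$

where $K_{P_1}(P_2)$ is all of $P_1$'s knowledge of $P_2$. The idea is that if such a higher-dimensional, better-predicting, uncommunicative neighbor is so much as possible, then $P_2$ can't rule it out from inside its own horizon, and so can't know that it knows everything.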
One moral that can accompany this observation is simply to deny that it makes sense, and then go on feeling like and claiming to be God, when the horizon of your selected empirical universe is most likely just the epistemic horizon of your cognitive particle. Another moral that can attach to it, which happens to be mine, is to accept it, hold onto a good definition of arrogance -- since you're godlike and probably fooled about being godlike -- and attempt to avoid being arrogant. Intrinsic pharmaceutical technology (or perhaps surjective injections from P1) should make a good partial solution for analogue pleasures to master morality in an otherwise equal state. Where my personal identity theory might seem to reflect an epistemological skepticism, however, I would suggest rather that it's currently a decent response to epistemological skepticism, and an input that an optimal solution is likely going to need.
Among yet other optimal-solution informants with regard to personal identity theory, I want to be further along in clarifying the issue of redundant "qualitatively identical" persons. At first glance, it would seem wise for a cognitive particle to be a set of entangled qualitatively identical persons. In the designs of some (though not all) functionalists, the potential permanent death of one person instance is morally permissible simply because its status as a moral subject is numerically identical with that of one or more other person instances in the set: in replacing it with a newly constructed person instance, no loss is incurred beyond a slight inconvenience in manufacturing, and no moral failure is incurred in any case.
For other functionalists, those disposed to a more radical particularism, that a cognitive particle's persons are qualitatively identical wouldn't be enough, not even if their executive experience component denied them knowledge of their positions relative to each other in x-space. Subdoxastic objects, even with respect to the necessary internalistic epistemology, ensure that each person instance is a moral subject of its own moral agency. This is unfortunate, contrary to what one might wish, because it seems that an executive operator in chosen denial (or pure ignorance) can't increase its unitary durability by increasing the number of its independent clones -- and, even more unfortunately, certainly not by increasing the number of autonomous lovers of life with the freedom and ability to configure "surprising" end states.
If you have thoughts on this, or other thoughts on the topic generally, please share them.