I’m not sure whether you understood my point, so I’m going to restate it, just in case, in different words.
Suppose there are only two kinds of base points of blur you can use as a reference to gauge improvement: 1) blur so heavy that text is almost impossible to read, and 2) noticeable blur, but text that is still legible.
Using 1) as a reference point is less reliable than using 2). This is because if your reference point is 1), a wider range of combinations of myopic blur and astigmatic blur can recreate that same “almost impossible to read” state. For example, if the distance at which you reach 1) has not changed, that could mean A) no change in vision, B) myopia improved but astigmatism worsened, or C) astigmatism improved but myopia worsened. Although A, B, and C differ in principle, in practice the difference is hard to notice. If instead your base reference point is 2), it becomes easier to track improvements in myopic blur without astigmatic blur adding noise to the signal, because as your reference point approaches clear text the myopic-blur component of the signal shrinks first, and only once the myopic-blur signal reaches zero does the astigmatic-blur component start to shrink.
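To make the A/B/C point concrete, here is a toy sketch. It assumes perceived blur is roughly the sum of a myopic and an astigmatic contribution (a deliberate simplification, not an optical formula), and the specific values are hypothetical:

```python
# Toy model: treat perceived blur as myopic blur + astigmatic blur
# (a simplifying assumption, not real optics). Near reference point 1),
# several different combinations produce the same heavy total blur,
# so distance to 1) cannot distinguish them.

combos = {
    "A) no change":                  (1.50, 0.50),  # (myopic, astigmatic)
    "B) myopia better, astig worse": (1.25, 0.75),
    "C) astig better, myopia worse": (1.75, 0.25),
}

for label, (myopic, astigmatic) in combos.items():
    total = myopic + astigmatic
    print(f"{label}: total blur = {total:.2f}")
# All three combinations print the same total blur of 2.00.
```

All three scenarios land on the same total, which is exactly why distance to reference point 1) stays constant even when the underlying myopia and astigmatism have changed.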
Besides improving the signal-to-noise ratio, an additional benefit of moving your reference point closer to clear text is that your ability to detect differences in blur is greatly improved, due to the 1/x mathematical nature of changes in clarity. (That is, each time you measure, you are better able to notice whether your vision has changed.)
That said, in theory the best reference point is a third option, 3): text at maximum clarity that still shows noticeable blur. In practice, though, there is no meaningful difference between 2) and 3).