In case you've always wondered why setting alpha to 0.9 doesn't return 0.9 when you read it back...
Internally, alpha is stored as an integer between 0 and 255, converted (with rounding toward the stored byte) from the [0, 1.0] value you pass in. This is braindead design, but a fact of life.
So if you add 0.1 to an alpha of 0 ten times, reading the value back between additions, you end up with a number less than 1.0, and you need an eleventh addition to reach full alpha (going to eleven is usually a good thing, but in this case it isn't).
If you are incrementing the alpha by 0.01, the actual alpha will be off by more than 20% by the 100th iteration — roughly 0.78 instead of 1.0 (!).
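The drift is easy to reproduce. Here's a minimal simulation — assuming the stored byte is produced by truncating toward zero, which matches the numbers above; the real rounding rule may differ:

```python
def quantize(alpha):
    # Simulate the internal storage: clamp to [0, 1], truncate to a
    # byte in [0, 255], then convert back to a float on read.
    byte = int(max(0.0, min(1.0, alpha)) * 255)
    return byte / 255

def accumulate(step, iterations):
    # Read-modify-write loop: each write quantizes and each read
    # returns the quantized value, so the error compounds.
    alpha = 0.0
    for _ in range(iterations):
        alpha = quantize(alpha + step)
    return alpha

print(accumulate(0.1, 10))    # ~0.98, not 1.0
print(accumulate(0.01, 100))  # ~0.78 -- off by more than 20%
```

The smaller the step, the worse the relative loss: truncating 0.01 * 255 = 2.55 throws away over a fifth of every increment.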
So the only way to keep alphas honest is to store the last value you tried to set alpha to, and use that stored value — not the read-back one — as the input in your code.
Moral: Don't use the existing value of alpha (ever); always set it and manipulate it from a model.
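One sketch of that "model" approach — a hypothetical wrapper that keeps the authoritative alpha at full precision and only ever pushes it to the display object (simulated here as a quantizing stand-in, since the source doesn't name a specific API):

```python
class QuantizingDisplay:
    # Stand-in for a real display object that stores alpha as a byte.
    def __init__(self):
        self._byte = 255

    @property
    def alpha(self):
        return self._byte / 255

    @alpha.setter
    def alpha(self, value):
        self._byte = int(max(0.0, min(1.0, value)) * 255)

class AlphaModel:
    # Own the true alpha yourself; never read it back from the display.
    def __init__(self, display):
        self._display = display
        self._alpha = display.alpha  # seed once, then we are authoritative

    @property
    def alpha(self):
        return self._alpha

    @alpha.setter
    def alpha(self, value):
        self._alpha = max(0.0, min(1.0, value))
        self._display.alpha = self._alpha  # display quantizes; we don't care
```

Incrementing through the model now lands where you expect: a hundred +0.01 steps from 0 end at 1.0 (within float epsilon), while the same loop against the raw display drifts to roughly 0.78.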