By inspection, whenever the code in BaseContext.validateMaskTexture() creates a new mask texture it also allocates a brand new mask storage buffer and resets the counters that track how much accumulated mask data is waiting to be flushed and rendered.
For the case where a new operation is too large for the buffer and a texture has already been created, this is OK because the code flushes the vertex buffer first.
But in the case where we experience a surface-lost situation on the mask texture (which is rare enough in the first place) while mask data is waiting to be flushed (which makes the situation even rarer, because it means the loss must happen while we are in the middle of rendering a frame), the top of the function nulls out the mask texture and all that should be needed is to recreate the texture. Unfortunately, the code that follows cannot flush the vertex buffer because the mask texture is null, so it simply recreates the mask texture - but it also creates a brand new, empty mask buffer and zeros out the accumulated mask coordinates at the same time, discarding the data that was waiting to be flushed.
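As a rough illustration of the two paths described above, here is a minimal, self-contained sketch; all of the names (MaskTexture, maskBuffer, curMaskRow, flushVertexBuffer, and so on) are assumptions for illustration only and do not mirror the actual Prism source:

```java
import java.nio.ByteBuffer;

// Simplified sketch of the control flow described above. Every name here is
// hypothetical; the real BaseContext code differs in detail.
class MaskTextureSketch {
    static class MaskTexture {
        final int w, h;
        boolean surfaceLost;
        MaskTexture(int w, int h) { this.w = w; this.h = h; }
    }

    MaskTexture maskTex;
    ByteBuffer maskBuffer;
    int curMaskRow, curMaskCol;   // how much mask data has been accumulated

    void flushVertexBuffer() {
        // Render whatever has been accumulated so far, then reset counters.
        curMaskRow = curMaskCol = 0;
    }

    void validateMaskTexture(int neededW, int neededH) {
        if (maskTex != null && maskTex.surfaceLost) {
            // Surface lost: drop the old texture so it is recreated below.
            maskTex = null;
        }
        if (maskTex == null || neededW > maskTex.w || neededH > maskTex.h) {
            if (maskTex != null) {
                // "Operation too large" path: pending mask data is flushed
                // first, so replacing the buffer below is harmless here.
                flushVertexBuffer();
            }
            // On the surface-lost path maskTex is already null, so the flush
            // above is skipped, yet the buffer is still replaced and the
            // counters zeroed, silently discarding any pending mask data.
            maskTex = new MaskTexture(neededW, neededH);
            maskBuffer = ByteBuffer.allocate(neededW * neededH);
            curMaskRow = curMaskCol = 0;
        }
    }
}
```

In this sketch, the surface-lost path skips the flush because the texture is already null, yet still resets the buffer and counters, which is the data loss described above.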
This is not high risk for two reasons:
- The worst outcome is that we render the wrong information for one frame, but recover on the next.
- The likelihood of a surface-lost event after having already accumulated some mask data for a frame should be exceedingly small to non-existent. I'm not even sure how to construct a test case to try to trigger this condition.