A DESCRIPTION OF THE PROBLEM:
Having chars that are no longer standalone characters but are instead part of a surrogate pair isn't acceptable for any application that wants to easily manipulate and compare string values.
Reasons To Increase:
You already have to cast char to int to handle supplementary code points, so increasing the default char size wouldn't be a problem.
Unicode expanded its code point range a while ago (the maximum is now U+10FFFF), and it's time for the full upgrade.
You already have an optimized implementation that stores strings using only the number of bytes the char values require, so in practice memory use would be the same or less after raising the char max value, especially since surrogate pairs would no longer take up two indexes.
It's already very difficult to support Unicode in string manipulation, so why not just force apps to recompile with the new char max value?
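The problem described above can be sketched in Java, assuming that is the language this request targets (it uses a 16-bit char, so code points above U+FFFF are stored as surrogate pairs and a single visible character occupies two indexes):

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        // U+1D11E (musical G clef) lies outside the Basic Multilingual
        // Plane, so a 16-bit char cannot hold it; the string stores it
        // as a surrogate pair of two char units.
        String clef = "\uD834\uDD1E"; // one user-visible character

        // length() counts 16-bit char units, not characters.
        System.out.println(clef.length()); // 2

        // codePointCount() counts actual Unicode code points.
        System.out.println(clef.codePointCount(0, clef.length())); // 1

        // Neither index holds a valid standalone character: each char
        // is only "part of a set", as the problem statement puts it.
        System.out.println(Character.isSurrogate(clef.charAt(0))); // true
    }
}
```

This is why naive index-based manipulation (substring, reversal, per-char comparison) can split a character in half and produce broken strings.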