JDK-4104222

spec for Float.valueOf() gives 0.0 for both "0.0" and "-0.0"; Implementation OK


    • Type: Bug
    • Resolution: Fixed
    • Priority: P4
    • Fix Version/s: 1.4.0
    • Affects Version/s: 1.1.5, 1.2.0
    • Component/s: core-libs
    • Resolved In Build: beta2
    • CPU: generic
    • OS: generic

      JLS 20.9.17 describes the process of converting a string into a float
      value. A strict interpretation of the spec calls for both "0.0" and
      "-0.0" to be converted to +0.0.

      In the JavaSoft JDK implementation of valueOf, Float.valueOf("0.0")
      returns +0.0 and Float.valueOf("-0.0") returns -0.0. This is an
      important distinction if the result is used in later calculations.
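
      For illustration, a small test along the following lines shows the
      difference (the bit-pattern and division checks are illustrative
      additions, assuming the implementation behavior described above):

         public class NegativeZeroDemo {
             public static void main(String[] args) {
                 float pos = Float.valueOf("0.0").floatValue();
                 float neg = Float.valueOf("-0.0").floatValue();

                 // The two zeros carry distinct IEEE 754 bit patterns.
                 System.out.println(Float.floatToIntBits(pos)); // 0
                 System.out.println(Float.floatToIntBits(neg)); // -2147483648 (sign bit set)

                 // The sign of zero is visible in later calculations.
                 System.out.println(1.0f / pos); // Infinity
                 System.out.println(1.0f / neg); // -Infinity
             }
         }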

      Keep the implementation as-is (it preserves valuable information)
      and change the spec.

      JLS 20.9.17 currently
      reads, in part:

         Leading and trailing whitespace (20.5.19) characters in s are
         ignored. The rest of s should constitute a FloatValue as described
         by the lexical syntax rules:

               FloatValue:
                       Sign_opt Digits . Digits_opt ExponentPart_opt
                       Sign_opt . Digits ExponentPart_opt

         where Sign, Digits, and ExponentPart are as defined in 3.10.2. If
         it does not have the form of a FloatValue, then a
         NumberFormatException is thrown. Otherwise, it is regarded as
         representing an exact decimal value in the usual "computerized
         scientific notation"; this exact decimal value is then conceptually
         converted to an "infinitely precise" binary value that is then
         rounded to type float by the usual round-to-nearest rule of IEEE
         754 floating-point arithmetic. Finally, a new object of class Float
         is created to represent this float value.
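
      For comparison, a strict implementation of this wording could be
      sketched as follows; java.math.BigDecimal is used only to model the
      "exact decimal value" step, which is an assumption of the sketch
      rather than something the spec mandates. Because BigDecimal has no
      negative zero, both "0.0" and "-0.0" round to +0.0:

         import java.math.BigDecimal;

         class StrictCurrentSpec {
             // Strict reading of the current wording: the whole FloatValue,
             // sign included, denotes one exact decimal value; "0.0" and
             // "-0.0" denote the same exact value, and rounding that exact
             // value to float yields +0.0 in both cases.
             static float valueOf(String s) {
                 BigDecimal exact = new BigDecimal(s.trim()); // exact decimal value
                 return exact.floatValue();                   // round to nearest float
             }

             public static void main(String[] args) {
                 System.out.println(valueOf("0.0"));  // 0.0
                 System.out.println(valueOf("-0.0")); // 0.0 (sign of zero is lost)
             }
         }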

      The problem could be fixed by changing it to this:

         Leading and trailing whitespace (20.5.19) characters in s are
         ignored. The rest of s should constitute a FloatValue as described
         by the lexical syntax rules:

               FloatValue:
                       Sign_opt UnsignedFloatValue

               UnsignedFloatValue:
                       Digits . Digits_opt ExponentPart_opt
                       . Digits ExponentPart_opt

         where Sign, Digits, and ExponentPart are as defined in 3.10.2. If
         it does not have the form of a FloatValue, then a
         NumberFormatException is thrown. Otherwise, the UnsignedFloatValue
         part is regarded as representing an exact decimal value in the
         usual "computerized scientific notation"; this exact decimal value
         is then conceptually converted to an "infinitely precise" binary
         value that is then rounded to type float by the usual
         round-to-nearest rule of IEEE 754 floating-point arithmetic. If the
         optional sign was present and was '-', the unary minus operator is
         then applied to the float. Finally, a new object of class Float is
         created to represent this float value.
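
      Under the proposed wording the sign is applied after rounding, so a
      conforming implementation could look like the sketch below (the class
      name is made up, and Float.valueOf on the stripped string stands in
      for the "exact decimal value rounded to float" step applied to the
      UnsignedFloatValue):

         class ProposedValueOf {
             // Sketch of the proposed wording: strip Sign_opt, convert the
             // UnsignedFloatValue to a float, then apply the unary minus
             // operator as the final step, so "-0.0" yields -0.0.
             static Float valueOf(String s) {
                 s = s.trim();
                 boolean negative = s.startsWith("-");
                 if (negative || s.startsWith("+")) {
                     s = s.substring(1);                          // remove Sign_opt
                 }
                 float magnitude = Float.valueOf(s).floatValue(); // UnsignedFloatValue
                 return new Float(negative ? -magnitude : magnitude);
             }

             public static void main(String[] args) {
                 System.out.println(valueOf("-0.0"));                     // -0.0
                 System.out.println(1.0f / valueOf("-0.0").floatValue()); // -Infinity
             }
         }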

      Let me make another attempt to explain why the current spec converts
      "-0.0" to +0.0: the method is defined as a two-step conversion that
      converts first to an exact value and then to a float. "-0.0" and
      "+0.0" are both converted to the same exact value, because there is
      only one exact value zero. When the exact value zero is converted to
      a float, the IEEE rules state that the result is +0.0. The modified
      spec I propose above applies the sign after the string-to-exact and
      exact-to-float conversions, which makes it possible for either of the
      two floats +0.0 and -0.0 to be returned.
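
      The IEEE 754 fact this relies on is that the unary minus operator
      applied to +0.0 produces -0.0, distinguished only by the sign bit;
      a tiny check (added here for illustration):

         class SignOfZero {
             public static void main(String[] args) {
                 float roundedZero = 0.0f;     // rounding the exact value zero gives +0.0
                 float negated = -roundedZero; // unary minus applied afterwards gives -0.0
                 System.out.println(Float.floatToIntBits(roundedZero)); // 0
                 System.out.println(Float.floatToIntBits(negated));     // -2147483648 (sign bit)
             }
         }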

            Assignee: Joe Darcy (darcy)
            Reporter: Tim Bell (tbell)