Type: Enhancement
Resolution: Fixed
Priority: P4
Version: 9
The overall goal is that the tool should be able to generate a large structured collection
of java classes/interfaces/enums with a variety of constructors/methods/fields
with a variety of javadoc comments. The use case is to be able to feed these files
into javadoc, to verify that javadoc generates valid HTML output from all these different
types of input.
The generated files should all be "correct" -- we are not interested in any negative
testing with bad classes or comments.
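As a purely illustrative sketch (the class and member names are invented, not produced by any actual tool), one generated file of the kind described above might look like this: a correct, compilable class whose constructor, field, and method each carry a well-formed javadoc comment.

```java
/**
 * A sample generated class with a variety of documented members.
 * All comments are deliberately well-formed, since the goal is
 * positive testing of javadoc's HTML generation, not negative testing.
 */
public class Sample0 {
    /** A documented field holding the stored value. */
    public final int value;

    /**
     * Creates an instance with the given value.
     *
     * @param value the initial value to store
     */
    public Sample0(int value) {
        this.value = value;
    }

    /**
     * Returns twice the stored value.
     *
     * @return the doubled value
     */
    public int doubled() {
        return value * 2;
    }

    public static void main(String[] args) {
        // Trivial exercise of the generated members.
        System.out.println(new Sample0(21).doubled());
    }
}
```

Running javadoc over a large, systematically varied set of such files would give a known-good baseline for the generated HTML.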
Why this way? Traditionally, javadoc has been "tested" by running it on the JDK API
and "eyeballing" the result. There are two problems with this approach: first, it is very
unscientific(!), and second, the user-written javadoc comments contain many HTML
and tag errors that are propagated to the output, so it can be difficult to determine
which errors are due to bad javadoc comments and which are due to bugs in the
javadoc tool causing bad output to be generated. Joe Darcy has led an excellent
effort to eliminate the errors in the user-written comments, but it would still be better
to have a more carefully designed set of synthetic classes and documentation to run
through javadoc.
Since the set of possible files you could generate is *huge*, it would be desirable to
design the set of generated cases fairly carefully. For example, we probably don't need
to test every possible sort of javadoc tag in every place where a javadoc comment can
appear. So it may be interesting to use simple comments when testing all the possible
places where a comment can appear, and then separately to focus a set of test cases
on exercising all the javadoc tags in just a few of the places that comments can appear.
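The partitioning described above can be sketched as two small test matrices rather than a full cross product. The placement and tag lists below are illustrative assumptions, not an exhaustive enumeration from the tool:

```java
import java.util.List;

public class TestMatrix {
    // Places a doc comment can appear (illustrative subset).
    static final List<String> PLACEMENTS =
        List.of("class", "interface", "enum", "constructor", "method", "field");
    // Javadoc tags to exercise in depth (illustrative subset).
    static final List<String> TAGS =
        List.of("@param", "@return", "@throws", "@see", "@deprecated", "@since");

    public static void main(String[] args) {
        // Matrix 1: one simple comment at every placement.
        int placementCases = PLACEMENTS.size();
        // Matrix 2: every tag, but only at a few representative placements.
        List<String> fewPlacements = List.of("class", "method");
        int tagCases = fewPlacements.size() * TAGS.size();
        int total = placementCases + tagCases;
        int fullCrossProduct = PLACEMENTS.size() * TAGS.size();
        System.out.println(total + " cases instead of " + fullCrossProduct);
    }
}
```

This keeps the generated set small enough to review while still covering each dimension of variation at least once.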
The metric for success is that when we run these files through javadoc, we should
achieve maximum code coverage in those parts of javadoc that generate
HTML output.
Relates to: JDK-8130880 Create sampleapi regression test (Closed)