The "test" Vector API is implemented in terms of the vector "compare" operation, as shown in the following code:
```
@Override
public abstract
VectorMask<Byte> test(VectorOperators.Test op);

/*package-private*/
@ForceInline
final
<M extends VectorMask<Byte>>
M testTemplate(Class<M> maskType, Test op) {
    ByteSpecies vsp = vspecies();
    if (opKind(op, VO_SPECIAL)) {
        ByteVector bits = this.viewAsIntegralLanes();
        VectorMask<Byte> m;
        if (op == IS_DEFAULT) {
            m = bits.compare(EQ, (byte) 0);
        } else if (op == IS_NEGATIVE) {
            m = bits.compare(LT, (byte) 0);
        } else {
            throw new AssertionError(op);
        }
        return maskType.cast(m);
    }
    int opc = opCode(op);
    throw new AssertionError(op);
}
```
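To make the mapping above concrete, here is a small scalar model (not the JDK implementation, just an illustration) of what the two special test operations compute per lane: IS_DEFAULT behaves like compare(EQ, 0) and IS_NEGATIVE like compare(LT, 0) on the integral lane bits. The class and lane values are made up for the example.

```java
import java.util.Arrays;

public class TestViaCompareModel {
    public static void main(String[] args) {
        // Example lane values for a byte vector.
        byte[] lanes = { -3, 0, 7, Byte.MIN_VALUE, Byte.MAX_VALUE };

        boolean[] isDefault  = new boolean[lanes.length]; // models test(IS_DEFAULT)
        boolean[] isNegative = new boolean[lanes.length]; // models test(IS_NEGATIVE)
        for (int i = 0; i < lanes.length; i++) {
            // IS_DEFAULT lane predicate == compare(EQ, (byte) 0)
            isDefault[i] = lanes[i] == 0;
            // IS_NEGATIVE lane predicate == compare(LT, (byte) 0)
            isNegative[i] = lanes[i] < 0;
        }
        System.out.println("IS_DEFAULT: " + Arrays.toString(isDefault));
        System.out.println("IS_NEGATIVE: " + Arrays.toString(isNegative));
    }
}
```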
The masked "test" is currently implemented as the unmasked "test" followed by an "and" with the mask, which could instead be optimized to use the masked vector "compare" directly. This saves one "and" instruction on architectures with predicate-register support, such as ARM SVE and x86 AVX-512.
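The optimization relies on a per-lane identity: and-ing the mask into the unmasked test result gives the same mask as evaluating the comparison only on active lanes. A minimal scalar sketch of that equivalence (modeling IS_NEGATIVE; lane values, mask, and class name are made up for illustration):

```java
import java.util.Arrays;

public class MaskedTestEquivalence {
    public static void main(String[] args) {
        byte[] lanes   = { -3, 0, 7, -128, 0, 127, -1, 42 };
        boolean[] mask = { true, false, true, true, false, true, false, true };

        // Current scheme: unmasked test(IS_NEGATIVE), then "and" with the mask.
        boolean[] testThenAnd = new boolean[lanes.length];
        for (int i = 0; i < lanes.length; i++) {
            testThenAnd[i] = (lanes[i] < 0) && mask[i];
        }

        // Optimized scheme: masked compare(LT, 0, mask) -- inactive lanes are
        // simply false, so no separate "and" step is needed.
        boolean[] maskedCompare = new boolean[lanes.length];
        for (int i = 0; i < lanes.length; i++) {
            maskedCompare[i] = mask[i] && (lanes[i] < 0);
        }

        System.out.println("equivalent: "
                + Arrays.equals(testThenAnd, maskedCompare));
    }
}
```

On a predicated ISA the second form maps to a single governed compare, which is where the saved "and" instruction comes from.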