Hidden gems in Java 16 and 17
Abstract
There are some nice features in Java 16 and 17:
- Period-of-day support was added to java.time formats
- Stream.toList() was added (not the same as collect(toUnmodifiableList()), which does not accept null)
- Stream.mapMulti() was added to perform zero-to-one, one-to-one, or one-to-many operations
- The java.util.HexFormat class was added
There’s much more to a new Java release than the well-known JEPs.
Each Java release has many targeted JDK Enhancement Proposals (JEPs), which are all discussed in articles and conference talks. But releases, including Java 16 and the soon-to-be-ready Java 17, also have hidden gems, including features, deprecations, removals, enhancements, additions, and bug fixes. While those gems aren’t given the same attention as the major JEPs, they can be very significant.
Here, I’ll present some of the gems in both JDK 16 and the upcoming JDK 17 long-term support (LTS) releases. These are, admittedly, gems from my point of view. And, yes: I know it’s strange to describe deprecations and removals as “gems.” However, those are parts of Java that have outlived their usefulness.
At the time of writing this article, I’m using the jshell tool from the early-access Java 17 build 25. If you’d like to follow along and test the JDK 17 features, download a Java 17 build, fire up a terminal, check your version, and run jshell. Note that you might be seeing a newer version of the build.
And now, without further ado, here are the hidden JEP and non-JEP gems in JDK 16 and JDK 17.
Feature (JDK 16): Period-of-day was added to java.time formats
Some developers might want to express periods in a day, such as “in the morning,” “in the afternoon,” or “at night,” not just a.m. or p.m. To address that, there’s a new formatter pattern, called B, and its supporting method has been added to the java.time.format.DateTimeFormatter and DateTimeFormatterBuilder classes.
The following example translates the day periods and produces day period text depending on the time of the day and locale:
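A minimal sketch, assuming a US locale; the exact day-period wording comes from the locale’s CLDR data:

import java.time.LocalTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

// "B" is the new pattern letter for the period of day
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("h:mm B", Locale.US);

System.out.println(LocalTime.of(8, 30).format(formatter));   // e.g., "8:30 in the morning"
System.out.println(LocalTime.of(14, 45).format(formatter));  // e.g., "2:45 in the afternoon"
System.out.println(LocalTime.of(22, 15).format(formatter));  // e.g., "10:15 at night"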
For more information, see the documentation for class DateTimeFormatter.
Feature (JDK 16): Stream.toList() method was added
Since its introduction in Java 8, the Stream API has been criticized for verbosity. For example, performing a simple mapping transformation from a list of numeric strings to a list of integers requires writing as much as the following.
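Here’s a sketch of that verbose form, assuming a hypothetical list of numeric strings:

import java.util.List;
import java.util.stream.Collectors;

List<String> numbers = List.of("1", "2", "3");

// The pre-JDK 16 idiom: terminate the stream with collect(Collectors.toList())
List<Integer> integers = numbers.stream()
        .map(Integer::valueOf)
        .collect(Collectors.toList());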
With the new default method toList() in the Stream interface, you can now write shorter, clearer code such as the following.
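A sketch of the shorter pipeline, reusing the hypothetical list from above:

List<Integer> integers = numbers.stream()
        .map(Integer::valueOf)
        .toList();   // returns an unmodifiable list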
This is more than a simple shortcut for two reasons.
- While the specification does not guarantee it, collect(toList()) produces a mutable list, and many users already rely on this fact. While the new method Stream.toList() produces an unmodifiable list, it is no shortcut to collect(toUnmodifiableList()), because toUnmodifiableList() doesn’t accept nulls.
- The implementation of Stream.toList() is not constrained by the Collector interface; therefore, Stream.toList() allocates less memory. That makes it optimal to use when the stream size is known in advance.
For more information, see the documentation for Stream.toList().
Feature (JDK 16): Stream.mapMulti() method was added
The mapMulti() method signature is as follows:
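default <R> Stream<R> mapMulti(BiConsumer<? super T, ? super Consumer<R>> mapper)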
As you can see, it is an intermediate operation added as a default method to the Stream interface. Other versions include type-specific methods for int, long, and double: mapMultiToInt(), mapMultiToLong(), and mapMultiToDouble().
These operations return a stream in which each element of this stream is replaced with zero or more elements.
The replacements are performed by applying the provided mapping function to each element in conjunction with a consumer argument that accepts replacement elements. The mapping function calls the consumer zero or more times to provide the replacement elements.
Let’s see how to use mapMulti() in three scenarios.
First scenario: Zero-to-one (0…1) mapping. Calling the mapMulti() mapper’s accept(R r) consumer for only a few selected items achieves a filter-like pipeline. For example, using it might help check an element against a predicate and then map it to a different value.
In the following code snippet, you have “Java” and some of its project names, and you want to keep only the names that are at least five characters long, replacing each name with its length:
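A minimal sketch, assuming a hypothetical list of project names:

List<String> projects = List.of("Java", "Panama", "Loom", "Amber", "");

List<Integer> lengths = projects.stream()
        .<Integer>mapMulti((name, consumer) -> {
            // Push only the names that are at least five characters long,
            // replacing each accepted name with its length
            if (name.length() >= 5) {
                consumer.accept(name.length());
            }
        })
        .toList();

System.out.println(lengths);   // [6, 5]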
If you didn’t have the mapMulti() method, you would need to perform this task with a combination of filter and map instead.
Second scenario: One-to-one (1…1) mapping. Using the previous example again, if you omit the condition, every element in the stream is mapped into a new one and accepted using the mapper, and you’d get the following result. In this example, mapMulti() effectively behaves like a map.
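Continuing with the same hypothetical list, dropping the condition makes mapMulti() accept every element exactly once:

List<Integer> lengths = projects.stream()
        .<Integer>mapMulti((name, consumer) -> consumer.accept(name.length()))
        .toList();

System.out.println(lengths);   // [4, 6, 4, 5, 0]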
Third scenario: One-to-many (1…*) mapping. As mentioned above, the mapper’s accept(R r) consumer could be called any number of times. Let’s modify the code snippet to replace each project name’s characters with the name’s length. For example, “Java” becomes “4444,” “Panama” becomes “666666,” and an empty string becomes nothing.
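A sketch of that one-to-many replacement, again using the hypothetical list from above, pushes the length once per character:

List<Integer> expanded = projects.stream()
        .<Integer>mapMulti((name, consumer) -> {
            // Call the consumer once per character, so "Java" contributes 4, 4, 4, 4
            // and the empty string contributes nothing
            for (int i = 0; i < name.length(); i++) {
                consumer.accept(name.length());
            }
        })
        .toList();

System.out.println(expanded);   // [4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 4, 4, 4, 4, 5, 5, 5, 5, 5]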
When to use mapMulti() instead of flatMap(). The main idea with mapMulti is that its mapper can be called multiple times (or not at all). Furthermore, the method’s internal use of the SpinedBuffer allows mapMulti to push the elements into a single flattened Stream instance without creating a new one for every group of output elements. That’s a key difference from flatMap.
The API documentation for mapMulti notes two use cases where using it is preferable to flatMap.
- When replacing each stream element with a small (possibly zero) number of elements. Using this method avoids the overhead of creating a new Stream instance for every group of result elements, as required by flatMap.
- When it is easier to use an imperative approach for generating result elements than it is to return them in the form of a Stream.
The documentation adds that because it creates only one Stream at a time, “performance-wise, the mapMulti is a winner in such cases.”
For more information, see the Stream.mapMulti() and Stream.flatMap() documentation.
Bug fix (JDK 16): C-style array declarations are not allowed in record components
Before JDK 16, the javac compiler accepted C-style array declarations in record components, but the record specification for JDK 16 forbids that. In particular, the compiler had accepted code like the following:
record R(int i[]) {}
This code is no longer accepted by the compiler, according to the Java specification for records in JDK 16, and if you try it, you will get the following error:
jshell> record R(int i[]) {}
| Error:
| legacy array notation not allowed on record components
| record R(int i[]) {}
| ^
For correct compilation, the record should be declared as follows:
jshell> record R(int[] i) {}
| created record R
Bug fix (JDK 16): Annotation interfaces may not be declared as local interfaces
Before JDK 16, the javac compiler accepted annotations declared as local interfaces. For example, the javac compiler had accepted code such as the following:
class C {
void m() {
@interface A {}
}
}
This code is no longer acceptable according to section 14.3 of the JDK 16 Java Language Specification, which says “A local interface may be a normal interface (§9.1), but not an annotation interface (§9.6).”
Therefore, trying to run the above code will lead to the following error:
jshell> class C {
...> void m() {
...> @interface A {}
...> }
...> }
| Error:
| annotation type declaration not allowed here
| @interface A {}
| ^-------------^
Bug fix (JDK 16): A NullPointerException is thrown if the first argument to Path.of or Paths.get is null
The varargs form of the Path.of() and Paths.get() methods has been changed in the JDK 16 release to throw NullPointerException consistently when the first parameter is null, as in the following:
jshell> Path path = Path.of(null,"path/to/file")
| Exception java.lang.NullPointerException
| at Objects.requireNonNull (Objects.java:208)
| at UnixFileSystem.getPath (UnixFileSystem.java:263)
| at Path.of (Path.java:147)
| at (#21:1)
Historically, these methods missed the null check on the first parameter when invoked with more than one parameter, so they would give you an unhappy result like the following if the code were run under JDK 11:
jshell> Path path = Path.of(null,"path/to/file")
path ==> null/path/to/file
Enhancement (JDK 16): The line terminator definition was changed in java.io.LineNumberReader
In some situations, Java had a problem reading files whose last line doesn’t end with a line terminator. For example, Java could have a problem with a file containing the following sequence:
line 1\n
line 2\n
line 3
For example, I am running the following code on the JDK 14.0.2 jshell tool:
- Running the jshell tool:
[mtaman]:~ ~~ jshell
|  Welcome to JShell -- Version 14.0.2
|  For an introduction type: /help intro
- Creating the string input:
jshell> String text = "Line 1\n Line 2\n Line 3";
text ==> "Line 1\n Line 2\n Line 3"
- Reading the lines and returning the number of lines in the provided text:
jshell> int readLines(String string) throws IOException {
   ...>     LineNumberReader reader = new LineNumberReader(new StringReader(string));
   ...>     while (reader.read() != -1) { }
   ...>     return reader.getLineNumber();
   ...> }
|  created method readLines(String)
- Reading the text with the readLines() method:
jshell> readLines(text)
$8 ==> 2
Before the Java 16 enhancement, the text input would have been considered to contain only two lines, each terminated by \n, which is incorrect behavior, as you can see from the value returned by the readLines() method.
After the Java 16 enhancement, the input is considered to contain three lines; the third line is terminated by the end of the stream. Therefore, the definition of a line terminator has been extended: a line is terminated either by one of the previously defined line terminators (\n, \r, or \r\n) or by the end of the stream.
If you run the same code under JDK 16 or JDK 17, readLines() should return 3, not 2, as in the following:
[mtaman]:~ ~~ jshell --enable-preview
| Welcome to JShell -- Version 17-ea
| For an introduction type: /help intro
jshell> readLines(text)
$15 ==> 3
Feature (JDK 17): The java.util.HexFormat class was added
The new dedicated class HexFormat converts between bytes or chars and hex-encoded strings, with optional formatting markup such as prefixes, suffixes, and delimiters.
HexFormat is a value-based class. Note that the use of identity-sensitive operations (including reference equality, identity hash code, or synchronization) on instances of HexFormat may have unpredictable results and should be avoided. Instead, use the equals method for comparisons. By the way, HexFormat is immutable and thread-safe.
For example, an individual byte could be converted to a string of hexadecimal digits using toHexDigits(int) and converted back to a primitive value using fromHexDigits(string), as in the following:
jshell> HexFormat hex = HexFormat.of()
hex ==> uppercase: false, delimiter: "", prefix: "", suffix: ""
jshell> byte b = 127;
...> String byteStr = hex.toHexDigits(b);
b ==> 127
byteStr ==> "7f"
jshell> byte byteVal = (byte)hex.fromHexDigits(byteStr);
byteVal ==> 127
jshell> byteStr.equals("7f");
$19 ==> true
jshell> b == byteVal;
$20 ==> true
For more information, see the HexFormat documentation.
JEP 406 (JDK 17): Pattern matching was added for a switch (preview)
JEP 406 enhances the Java programming language with pattern matching for switch expressions and statements and extensions to the language of patterns. These capabilities are a preview feature in JDK 17.
Extending pattern matching to switch allows an expression to be tested against several patterns, each with a specific action, so complex data-oriented queries can be expressed concisely and safely.
For example, you might want to use patterns to test the same variable against a number of possibilities, taking a specific action on each, but since the existing switch does not support that, you end up with an ugly chain of if...else tests such as the following:
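A sketch along the lines of the examples in JEP 406; the formatter method here is illustrative:

static String formatter(Object o) {
    String formatted = "unknown";
    if (o instanceof Integer i) {
        formatted = String.format("int %d", i);
    } else if (o instanceof Long l) {
        formatted = String.format("long %d", l);
    } else if (o instanceof Double d) {
        formatted = String.format("double %f", d);
    } else if (o instanceof String s) {
        formatted = String.format("String %s", s);
    }
    return formatted;
}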
This code benefits from pattern matching for instanceof expressions, but it is far from perfect. Why?
- The if...else approach allows coding errors to remain hidden because it uses an overly general control construct.
- The above code is not optimizable. Without compiler heroics, the if...else chain will have O(n) time complexity, even though the underlying problem is often O(1).
However, switch is a perfect platform for pattern matching. Thus, if the switch statements and expressions are extended to work on any type and allow case labels with patterns as well as constants, the above code could be rewritten more clearly and reliably as follows:
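Rewritten with a pattern switch, again following the shape of the JEP 406 examples, the same logic becomes:

static String formatterPatternSwitch(Object o) {
    return switch (o) {
        case Integer i -> String.format("int %d", i);
        case Long l    -> String.format("long %d", l);
        case Double d  -> String.format("double %f", d);
        case String s  -> String.format("String %s", s);
        default        -> o.toString();
    };
}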
The semantics of this switch are precise: A case label with a pattern matches the value of the selector expression if the value matches the pattern. Furthermore, and as a bonus, this code is optimizable; in this case, the JVM is more likely to perform the dispatch in O(1) time.
By the way, this JEP provides great support for null handling in pattern matching. Traditionally, switch statements and expressions throw NullPointerException if the selector expression evaluates to null. Therefore, all your testing for null must be done outside of the switch, as in the following:
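For instance, a pre-JEP 406 sketch has to reject null before entering the switch:

static void testFooBar(String s) {
    if (s == null) {
        System.out.println("oops!");
        return;
    }
    switch (s) {
        case "Foo", "Bar" -> System.out.println("Great");
        default           -> System.out.println("Ok");
    }
}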
However, if switch allows a selector expression of any type, and case labels can have type patterns, you could integrate the null test into the switch, as follows:
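With the preview feature enabled, a case null label can handle that inside the switch itself:

static void testFooBarPatterns(String s) {
    switch (s) {
        case null         -> System.out.println("Oops");
        case "Foo", "Bar" -> System.out.println("Great");
        default           -> System.out.println("Ok");
    }
}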
Isn’t this code cleaner? In the previous code, with a case null, the switch executes the code associated with that label. Without a case null, the switch throws NullPointerException, just as before.
Pattern matching with switch works with sealed types as well, so if the type of the selector expression is a sealed class (JEP 409, Sealed classes), the type coverage check can consider the permits clause of the sealed class to determine whether the switch block is complete. Consider the following example of a sealed interface S with three permitted subclasses: A, B, and C.
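A sketch of such a hierarchy, following the shape of the JEP 406 example, might look like this:

sealed interface S permits A, B, C {}
final class A implements S {}
final class B implements S {}
record C(int i) implements S {}    // implicitly final

static int testSealedExhaustive(S s) {
    return switch (s) {
        case A a -> 1;
        case B b -> 2;
        case C c -> 3;    // no default label needed: A, B, and C cover every permitted subtype
    };
}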
The compiler can determine that the type coverage of the switch block is types A, B, and C. Since the type of the selector expression, S, is a sealed interface whose permitted subclasses are exactly A, B, and C, this switch block is complete. As a result, no default label is needed.
For more information, see the page for JEP 406: Pattern matching for switch (preview).
JEP 411 (JDK 17): The Security Manager was deprecated for removal
In the era of Java applets downloaded by web browsers, the Security Manager protected the integrity of users’ machines and the confidentiality of their data by running applets in a sandbox. This sandbox denied access to resources such as the file system or the network.
The Security Manager drew a clear line between untrusted code (applets from a remote machine) and trusted code (classes on the local machine): It would approve all operations involving resource access for trusted code but reject them for untrusted code.
Meanwhile, the confidentiality of data was protected by the Java class libraries’ trusted implementations of modern cryptographic algorithms and protocols such as SHA-3, EdDSA, and TLS 1.3. Because security is a dynamic science, the JDK engineers continuously update the Java platform to address new vulnerabilities and to reflect new industry postures, for example, by deprecating weak cryptographic protocols.
Therefore, the Java Platform Group decided that it’s time to begin deprecating the Security Manager for removal in a future Java release. This is old functionality; Security Manager has been part of the platform since Java 1.0. That said, Security Manager has not been the primary means of securing Java client-side code for many years, and it has been rarely used to secure Java server-side code.
Deprecation of the Security Manager is being done in concert with the deprecation of the legacy Applet API (JEP 398), also targeted for JDK 17. The Applet API is being deprecated for removal because it is irrelevant now since all web browser vendors have either removed support for Java browser plugins or announced plans to do so.
For more information, see the page for JEP 411: Deprecate the Security Manager for removal.
JEP 306 (JDK 17): Always-strict floating-point semantics have been restored
Java currently supports two different models for floating-point operations. One is strict floating-point semantics, which today’s AMD and Intel microprocessors can support efficiently; however, Java’s default is a subtly different floating-point semantics scheme.
This split dates back to Java SE 1.2, when there were issues with the x87 math coprocessor. The split is no longer needed, because all of today’s processors support SSE2 (Streaming SIMD Extensions 2) and later extensions in a way that eliminates the need for the default semantics.
Therefore, JEP 306’s intention is to make floating-point operations consistently strict by restoring the original floating-point semantics to the language and JVM.
Read about this in the documentation for JEP 306 (JDK 17): Restore always-strict floating-point semantics.
Feature (JDK 17): ISO 639 language codes for Hebrew, Indonesian, and Yiddish default to current codes
Before JDK 17, the class constructor for Locale converted three ISO 639 language codes to their earlier, obsoleted forms:
- he maps to iw
- yi maps to ji
- id maps to in
In JDK 17, they default to the current codes. For example, id is now the language code for Indonesian instead of in.
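A quick sketch to observe the change:

import java.util.Locale;

// Prints "he" on JDK 17; earlier releases print the legacy code "iw"
System.out.println(new Locale("he").getLanguage());

// Prints "id" on JDK 17; earlier releases print "in"
System.out.println(new Locale("id").getLanguage());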
However, if you need the previous mappings for some reason, a new system property has also been introduced to revert to the legacy behavior. If -Djava.locale.useOldISOCodes=true is specified on the command line, the class constructor for Locale behaves in the same way as in prior releases.
For more information, see the release note.
Feature (JDK 17): Support was added for CLDR version 39
While I’m talking about Locale, JDK 17 has been upgraded to support Locale data based on the Unicode Consortium’s Unicode Common Locale Data Repository (CLDR) version 39. This CLDR version was released in April 2021.
To see what’s different in CLDR 39, see the Unicode Consortium’s release note.
Feature (JDK 17): Asynchronous log flushing was added to unified JVM logging
To avoid undesirable delays in a thread using unified JVM logging, you now can request that the unified logging system operate in asynchronous mode by using the -Xlog:async command-line option.
In asynchronous logging mode, all logging messages are queued to a buffer, and a standalone thread is responsible for flushing them to the corresponding outputs. The intermediate buffer is bounded; on buffer exhaustion, the enqueuing message is discarded.
If you wish to control the intermediate buffer size, you can use the command-line option -XX:AsyncLogBufferSize=<bytes>.
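For example, a hypothetical invocation that enables asynchronous logging of GC messages with a 2 MB intermediate buffer might look like this:

java -Xlog:async -Xlog:gc*:file=gc.log -XX:AsyncLogBufferSize=2097152 -version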
For more information, see the release note.
Bug fix (JDK 17): Support was added for specifying a signer in the keytool -genkeypair command
According to the Internet Engineering Task Force (IETF) Request for Comments, RFC 8410 section 10.2 provides an example of an X25519 certificate using Ed25519 to sign an X25519 public key. However, before JDK 17, the keytool utility’s -genkeypair command couldn’t generate a key agreement certificate such as that X25519 certificate.
To support this case in JDK 17, the -signer and -signerkeypass options have been added to the -genkeypair command of the keytool utility.
- The -signer option specifies the keystore alias of a PrivateKeyEntry for the signer.
- The -signerkeypass option specifies the password used to protect the signer’s private key.
These options allow -genkeypair to sign the certificate using the signer’s private key. This is very important if you want to generate a certificate with a key agreement algorithm as its public key algorithm.
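For example, a hypothetical keystore that already contains an Ed25519 PrivateKeyEntry under the alias ed25519ca could sign a new X25519 key pair like this (the aliases and passwords here are made up):

keytool -genkeypair -keyalg X25519 -alias x25519 \
    -signer ed25519ca -signerkeypass changeit \
    -keystore keystore.p12 -storepass changeit \
    -dname "CN=X25519 Key Agreement"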
For more information, see the release note.
Deprecation (JDK 17): The socket implementation factory mechanism was deprecated
Java used to have green threads, at least for Oracle Solaris, but modern versions of Java use what’s called native threads. Native threads are nice but relatively heavy in terms of resource utilization. You might need to tune the operating system if you want to have tens of thousands of them.
To overcome such threading limits and complexity, Project Loom comes to the rescue by introducing continuations (coroutines) and fibers (a kind of green thread), allowing you to choose between threads and fibers. With Loom, even a laptop can efficiently run millions of fibers, opening the door to new, or not so new, paradigms.
In favor of fibers, and to ease the integration with Loom in the future when the project is complete, a process was proposed for replacing the underlying implementation used by java.net.Socket and java.net.ServerSocket. The new implementation makes it easy to adapt to user-mode threads, that is, fibers.
In preparation for this overhaul, several changes to java.net.Socket, ServerSocket, and the existing underlying implementation are needed. Specifically, Java 17 deprecates the following static methods used to set the systemwide socket implementation factories:
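Per the JDK 17 release note, those methods are

static void ServerSocket.setSocketFactory(SocketImplFactory fac)
static void Socket.setSocketImplFactory(SocketImplFactory fac)
static void DatagramSocket.setDatagramSocketImplFactory(DatagramSocketImplFactory fac)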
It also deprecates the following two types:
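java.net.SocketImplFactory
java.net.DatagramSocketImplFactory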
These API points were used to statically configure a systemwide factory for the corresponding socket types in the java.net package. Unfortunately, these methods have mostly been obsolete since Java 1.4. For more information, see the release note.
Removal (JDK 17): The sun.misc.Unsafe::defineAnonymousClass method was removed
Java is moving towards strongly encapsulating all internal elements of the JDK, except for critical internal APIs such as sun.misc.Unsafe. The Java team is making sure that removal is gradual, while providing you with good alternatives.
For example, hidden classes (JEP 371) were added in JDK 15 to replace the JVM’s anonymous classes. The sun.misc.Unsafe::defineAnonymousClass method was deprecated in JDK 15, deprecated for removal in JDK 16, and finally removed in JDK 17, with the following API provided as a replacement:
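java.lang.invoke.MethodHandles.Lookup::defineHiddenClass(byte[] bytes, boolean initialize, ClassOption... options)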
For more information, see the release note.
Conclusion
Oracle is keeping Java relevant through the six-month release cadence. This release cycle brings preview features to the language so they can be tested and feedback can be collected. Once the feedback settles and no further significant changes are needed, a feature can be standardized in a subsequent release.
This article unearthed many of the hidden gems in JDK 16 and JDK 17. Remember, look beyond the JEPs when evaluating and adopting a new JDK release.