This post aims to dig into the subject in the most neutral way possible, so that everybody can make informed decisions based on facts, not by listening to people full of misconceptions or hidden agendas.
Annotations have been available since Java 5, codenamed Tiger and released in 2004.
In the Java computer programming language, an annotation is a form of syntactic metadata that can be added to Java source code. Classes, methods, variables, parameters and Java packages may be annotated.
— Wikipedia
https://en.wikipedia.org/wiki/Java_annotation
The simplest annotation looks like the following:

@MyAnnotation
public class Foo {}
Lacking annotations, previous Java versions had to approach some features in oblique ways.
Replacement of marker interfaces
Since Java's inception, there has been a need to mark a class, or a hierarchy of classes. Before Java 5, this was done through interfaces with no methods: Serializable and Cloneable are two examples of such interfaces. This kind of interface is obviously unlike any other: they don't define any contract between themselves and their implementing classes. Hence, they have earned the name of marker interfaces.

People new to Java generally ask questions about that approach, because it is a trick. Annotations remove the need for that trick while keeping the contract role of interfaces.
public class Foo implements MarkerInterface {} // 1 - marker interface
@MyAnnotation
public class Foo {} // 2 - marker annotation
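For the marker to be useful, some code must check for it at runtime. Here's a minimal self-contained sketch of such a check, assuming the hypothetical MyAnnotation is declared with runtime retention (@Target and @Retention are explained further down):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME) // required for the runtime check below
@interface MyAnnotation {}

@MyAnnotation
class Foo {}

class MarkerCheck {
    public static void main(String[] args) {
        Object object = new Foo();
        // the annotation-based equivalent of an instanceof check on a marker interface
        if (object.getClass().isAnnotationPresent(MyAnnotation.class)) {
            System.out.println("Foo is marked");
        }
    }
}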
Better metadata management
Deprecation is the process of flagging an API as obsolete. This way, users are informed about the change, can decide to stop using the API, and the latter may be removed with less impact in future versions. Prior to Java 5, deprecation was set in the JavaDocs:
/**
* Blah blah JavaDoc.
*
* @deprecated As of JDK version 1.1,
*/
public class DeprecatedApi {}
Obviously, this is a very fragile approach: the only way to leverage it is via the javadoc tool. The standard JavaDocs have a section dedicated to such deprecated APIs. Alternatively, the javadoc tool can be configured with a custom doclet to process JavaDoc metadata (including but not limited to @deprecated) in any desired way. Since Java 5, deprecation is flagged with the provided @Deprecated annotation:

/**
* Blah blah JavaDoc.
*/
@Deprecated
public class DeprecatedApi {}
NOTE: Old deprecated APIs keep the legacy approach, so they use both the JavaDoc tag and the annotation.
Additionally, since Java 9, @Deprecated accepts two elements:

- forRemoval (of type boolean): indicates whether the annotated element is subject to removal in a future version
- since (of type String): returns the version in which the annotated element became deprecated

@Deprecated(since="1.2", forRemoval=true)
public abstract class IdentityScope extends Identity {}
To create an annotation, one uses the @interface keyword:

public @interface MyAnnotation {}

However, this is not enough, as such an annotation cannot be put to use yet. Annotations require two more pieces of information: a target and a retention policy.
We will get into more detail later. For now, we first need to understand how annotations work. While classes inherit code from their parent class(es), annotations are composed:
@Target(ElementType.ANNOTATION_TYPE) // 1
@interface Foo {}
@Target(ElementType.ANNOTATION_TYPE) // 1
@interface Bar {}
@Foo
@Bar
@interface Baz {} // 2
1. @Target will be explained further down
2. @Baz is transitively annotated with both @Foo and @Bar
Here's the source code of @Target and @Retention:

@Retention(RetentionPolicy.RUNTIME) // 2
@Target(ElementType.ANNOTATION_TYPE) // 1
public @interface Target {
    ElementType[] value();
}

@Retention(RetentionPolicy.RUNTIME) // 2
@Target(ElementType.ANNOTATION_TYPE) // 1
public @interface Retention {
    RetentionPolicy value();
}
1. The @Target annotation tells on which elements the annotation can be set: classes and interfaces (ElementType.TYPE), methods (ElementType.METHOD), fields (ElementType.FIELD), other annotations (ElementType.ANNOTATION_TYPE), etc.
2. The @Retention annotation defines up to which step in the compilation process the annotation will be available: RetentionPolicy.SOURCE (discarded by the compiler), RetentionPolicy.CLASS (recorded in the class file but not loaded at runtime), or RetentionPolicy.RUNTIME (loaded at runtime and accessible through reflection).

This is summed up in the following class diagram:
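Putting target and retention together, here's a minimal sketch of a complete custom annotation declaration; the name Monitored is made up for this example:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.METHOD)           // may only be set on methods
@Retention(RetentionPolicy.RUNTIME)   // kept in the bytecode and readable at runtime
public @interface Monitored {}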
Annotations can define parameters. Parameters allow adding some level of configuration at the time the annotation is used. A parameter has a type and an optional default value. If no default value is defined, the parameter must be set when the annotation is used.
Parameter types are limited to the following:

- primitive types, e.g. int, long, etc.
- String
- Class<T>
- enum types
- other annotations
- arrays of the above

@Target(ElementType.TYPE)
@interface Foo {
    int bar();
    Class<? extends Collection> baz() default List.class;
    String[] qux();
}

@Foo(bar = 1, qux = { "a", "b", "c" })
class MyClass {}
If there's a single parameter and it's named value, its name can be omitted when it's set:

@Target(ElementType.TYPE)
@interface Foo {
    int value();
}

@Foo(1)
class MyClass {}
Since its inception, Java has allowed reflection, i.e. the capacity to get information about the code at runtime. Here's a sample:
var session = request.getSession();
var object = session.getAttribute("object"); // 1 - get an object stored in the session
var clazz = object.getClass(); // 2 - get the object's class
var methods = clazz.getMethods(); // 3 - get all public methods available on the object
for (var method : methods) {
    if (method.getParameterCount() == 0) { // 4 - keep only methods that accept no parameters
        method.invoke(object); // 5 - invoke each of them
    }
}

With annotations, the reflection API got relevant improvements: the AnnotatedElement interface, implemented among others by Class, Method and Field, offers methods such as isAnnotationPresent(), getAnnotation() and getAnnotations() to query annotations at runtime.
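As a minimal sketch, here's how those methods tie together; the Tagged annotation is hypothetical and declared with runtime retention on purpose:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Tagged {
    String value();
}

@Tagged("example")
class Bar {}

class AnnotationReflection {
    public static void main(String[] args) {
        Class<?> clazz = Bar.class;
        if (clazz.isAnnotationPresent(Tagged.class)) {         // check the annotation is there
            Tagged tagged = clazz.getAnnotation(Tagged.class); // read the annotation instance
            System.out.println(tagged.value());                // prints "example"
        }
    }
}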
With annotations, frameworks started to make use of them for different use cases. Among them, configuration was one of the most common: for example, instead of (or more precisely in addition to) XML, the Spring framework added a configuration option based on annotations.
For a long time, both users and providers were happy with runtime reflection access to annotations. Because it's mainly focused on configuration, reflection occurs at startup time. In constrained environments, this is too heavy a load for applications: the most well-known example of such an environment is the Android platform. One wants the fastest possible startup time there, and startup-time reflection slows it down.
An alternative to cope with that issue is to process annotations at compile-time. For that to happen, the compiler must be configured to use specific annotation processors. Those can have different outputs: simple files, generated code, etc. The tradeoff of that approach is that compilation takes a performance hit every time, but then startup time is not impacted.
One of the earliest frameworks that used this approach to generate code was Dagger: it’s a Dependency-Injection framework for Android. Instead of being runtime-based, it’s compile-time based. For a long time, compile-time code generation was limited to the Android ecosystem.
However, recently, back-end frameworks such as Quarkus and Micronaut also adopted this approach. The aim is to reduce application startup time by replacing runtime introspection with compile-time code generation. Additionally, Ahead-of-Time compilation of the resulting bytecode to native code further reduces startup time, as well as memory consumption.
The world of annotation processors is huge: this section is but a very small introduction, so one can proceed further if desired.
A processor is just a specific class that needs to be registered at compile-time. There are several ways to register them. With Maven, it’s just a matter of configuring the compiler plugin:
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.8.1</version>
            <configuration>
                <annotationProcessors>
                    <annotationProcessor>ch.frankel.blog.SampleProcessor</annotationProcessor>
                </annotationProcessors>
            </configuration>
        </plugin>
    </plugins>
</build>
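For reference, the same processor can also be passed directly to the compiler via the javac -processor flag; a sketch, where processor.jar is a hypothetical JAR containing the compiled processor:

javac -cp processor.jar -processor ch.frankel.blog.SampleProcessor MyClass.java

Alternatively, a processor JAR can register itself through the standard service-loader mechanism, by shipping a META-INF/services/javax.annotation.processing.Processor file containing the processor's fully qualified name: javac then discovers it automatically on the classpath.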
The processor itself needs to implement Processor, but the abstract class AbstractProcessor implements most of its methods except process(): in practice, it's enough to inherit from AbstractProcessor. Here's a very simplified diagram of the API:

Let's create a very simple processor. It should only list classes that are annotated with specific annotations. Real-world annotation processors would probably do something more useful, e.g. generate code, but such additional logic goes well beyond the scope of this post.
@SupportedAnnotationTypes("ch.frankel.blog.*") // 1
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class SampleProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, // 2
                           RoundEnvironment env) {
        annotations.forEach(annotation -> { // 3
            Set<? extends Element> elements =
                env.getElementsAnnotatedWith(annotation); // 4
            elements.stream()
                    .filter(TypeElement.class::isInstance) // 5
                    .map(TypeElement.class::cast) // 6
                    .map(TypeElement::getQualifiedName) // 7
                    .map(name -> "Class " + name + " is annotated with " + annotation.getQualifiedName())
                    .forEach(System.out::println);
        });
        return true;
    }
}
1. The processor will be called for every annotation that belongs to the ch.frankel.blog package
2. process() is the main method to override
3. Loop over every annotation the compiler found
4. Get all elements annotated with the current annotation; each one is typed as an Element subinterface
5. Here, only classes can be annotated, hence each element needs to be tested to check whether it's assignable to TypeElement
6. Cast the Element to TypeElement to access its additional attributes further down the operation chain
7. Get the fully qualified name of the annotated class
Annotations are very powerful, whether used at runtime or at compile time. On the flip side, the biggest issue is that they seem to work like magic: there's no easy way to know which reflection-using class or annotation processor is making use of them. It's up to everyone, in one's own context, to decide whether their pros outweigh their cons. Using them without any forethought does a great disservice to one's code… a disservice just as great as discarding them because of misplaced ideology.
I hope this post shed some light on how annotations work, so one can decide for oneself.
The complete source code for this post can be found on GitHub in Maven format.
First published on April 26th 2020 on A Java Geek