This is an introductory series on Apache Ignite. We will cover Apache Ignite, its features, and its use as an in-memory data grid, compute grid, distributed cache, near real-time cache and durable distributed database.

What is Ignite?

  • It is an in-memory compute platform.
  • It is an in-memory data grid.
  • It is durable, strongly consistent and highly available.
  • It provides an option to run SQL-like queries on the cache (with a JDBC API to support this).

Durable memory

Apache Ignite is a memory-centric platform based on the durable memory architecture. It allows you to store and process data in memory (RAM) and on disk (if Ignite native persistence is enabled). When native persistence is enabled, Ignite treats the disk as the superset of the data, which is capable of surviving crashes and restarts.

In-memory features

RAM is always treated as the first memory tier and all processing happens there. It has the following characteristics.

  • Off-heap based: all data and indexes are stored outside the Java heap, which helps in processing petabytes of data.
  • Since all data and indexes are off-heap, noticeable GC pauses are removed; application code is the only remaining source of stop-the-world events.
  • It has predictable memory usage. You can configure memory usage with MemoryConfiguration.
  • It uses memory as efficiently as possible and runs defragmentation routines in the background.
  • Data and indexes on disk and in memory are stored in the same page format, which improves performance and avoids unnecessary data-format conversion.

Persistence features

Here are a few high-level persistence features.

  • Persistence to disk is optional. You can enable or disable it.
  • It provides data resiliency. If persistence is enabled, the full dataset is stored on physical disk and the cluster can survive restarts and crashes.
  • It can execute SQL queries on the full dataset.
  • Cluster restarts are instantaneous. In-memory data is cached automatically.

In this post, we will use Spring Security to handle form-based authentication. You can also read my previous posts on Basic Authentication and Digest Authentication.

Technologies/ Frameworks used

Spring Boot, Spring Security, Thymeleaf, AngularJS, Bootstrap

Adding dependencies in pom.xml

In the example, we will use Spring Boot, Spring Security, Undertow and Thymeleaf, and will add their starters as shown below.
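As a sketch, the starter section of the pom would look like the following. The artifact IDs are the standard Spring Boot starters; versions are managed by the Spring Boot parent, and the Tomcat exclusion is an assumption needed because we swap in Undertow.

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-tomcat</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-undertow</artifactId>
    </dependency>
</dependencies>
```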


Spring Security Configurations

We will extend the WebSecurityConfigurerAdapter class, which is a convenient base class for creating a WebSecurityConfigurer.

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.csrf().disable()
        .authorizeRequests()
        .antMatchers("/static/**", "/", "/index", "/bower_components/**").permitAll()
        .anyRequest().authenticated()
        .and().formLogin().loginPage("/login").permitAll();
  }

  @Bean
  public UserDetailsService userDetailsService() {
    // for example purposes only; add users as needed
    InMemoryUserDetailsManager manager = new InMemoryUserDetailsManager();
    return manager;
  }

  @Bean
  SpringSecurityDialect securityDialect() {
    return new SpringSecurityDialect();
  }
}

The @EnableWebSecurity annotation enables Spring Security. We have overridden the configure method and configured the security. In the above code, we have disabled CSRF support (by default it is enabled). We allow requests to /index, /, the /static folder and its sub-folders, and the bower_components folder and its sub-folders without authentication, but all other requests must be authenticated. We refer to /login as our login page for authentication.

In the above code snippet, we are also registering the UserDetailsService. When we enable web security in Spring, it expects a bean of type UserDetailsService, which is used to get UserDetails. For example purposes, I am using the InMemoryUserDetailsManager provided by Spring.

MVC configuration

@Configuration
public class MvcConfig extends WebMvcConfigurerAdapter {
  @Override
  public void addViewControllers(ViewControllerRegistry registry) {
    // view names shown here are illustrative
    registry.addViewController("/login").setViewName("login");
    registry.addViewController("/index").setViewName("index");
  }
}

In the above configuration, we are registering view controllers and setting their view names. This is all the configuration we need to do to enable Spring Security. You can find the full working project, including the HTML files, on Github.

In this post, we will externalize the properties used in the application into a properties file and use PropertySourcesPlaceholderConfigurer to resolve the placeholders at application startup.

Java Configuration for PropertySourcesPlaceholderConfigurer

@Configuration
public class AppConfig {

  @Bean
  public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
    PropertySourcesPlaceholderConfigurer configurer = new PropertySourcesPlaceholderConfigurer();
    // the properties file name was elided in the original snippet
    configurer.setLocations(new ClassPathResource(""));
    return configurer;
  }
}

We created an object of PropertySourcesPlaceholderConfigurer and set the locations to search. In this example, we used ClassPathResource to resolve the properties file from the classpath. You can also use a file-based Resource, which needs the absolute path of the file.

DBProperties file

@Component
public class DBProperties {
  // property keys are illustrative
  @Value("${db.userName}")
  private String userName;

  @Value("${db.password}")
  private String password;

  @Value("${db.url}")
  private String url;

  // getters for instance fields
}

We used the @Value annotation to resolve the placeholders.

Testing the configuration

public class Main {
  private static final Logger logger = Logger.getLogger(Main.class.getName());

  public static void main(String[] args) {
    try (ConfigurableApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class, DBProperties.class)) {
      DBProperties dbProperties = context.getBean(DBProperties.class);
      logger.info("This is dbProperties: " + dbProperties.toString());
    }
  }
}

For testing, we created an AnnotationConfigApplicationContext, got the DBProperties bean from it and logged it using the Logger. This is a simple way to externalize configuration properties from the framework configuration. You can also get the full example code from Github.

In this post, we will discuss Digest Authentication with Spring Security. You can also read my previous post on Basic Authentication with Spring Security.

What is Digest Authentication?

  • This authentication method applies a hashing algorithm to the password (producing a password hash) entered by the user before sending it to the server. This, obviously, makes it much safer than Basic Authentication, in which the user's password travels in plain text (or Base64 encoded) and can be easily read by whoever intercepts it.
  • There are many such hashing algorithms in Java which can prove really effective for password security, such as MD5, SHA, BCrypt, SCrypt and PBKDF2WithHmacSHA1.
  • Please remember that once this password hash is generated and stored in the database, you cannot convert it back to the original password. Each time a user logs into the application, you have to regenerate the password hash and match it with the hash stored in the database. So, if a user forgets his/her password, you will have to send a temporary password and ask the user to change it. Well, it's a common trend nowadays.
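To make the mechanics concrete, here is a self-contained sketch of how a digest response is computed (per RFC 2617, without the optional qop extension). The class, helper names and sample nonce are illustrative, not part of the example project:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestDemo {
  // Hex-encoded MD5, as used by HTTP Digest authentication
  static String md5Hex(String input) {
    try {
      MessageDigest md = MessageDigest.getInstance("MD5");
      StringBuilder sb = new StringBuilder();
      for (byte b : md.digest(input.getBytes(StandardCharsets.UTF_8))) {
        sb.append(String.format("%02x", b));
      }
      return sb.toString();
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e);
    }
  }

  // response = MD5(HA1 ":" nonce ":" HA2) where
  // HA1 = MD5(username ":" realm ":" password) and HA2 = MD5(method ":" uri)
  static String digestResponse(String user, String realm, String password,
                               String nonce, String method, String uri) {
    String ha1 = md5Hex(user + ":" + realm + ":" + password);
    String ha2 = md5Hex(method + ":" + uri);
    return md5Hex(ha1 + ":" + nonce + ":" + ha2);
  }

  public static void main(String[] args) {
    // the nonce would normally come from the server's WWW-Authenticate header
    System.out.println(digestResponse("gaurav", "GAURAVBYTES.COM", "pwd",
        "sampleNonce", "GET", "/"));
  }
}
```

The server performs the same computation with its stored credentials and compares the result, which is why the plain password never travels over the wire.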

Let's start building a simple Spring Boot application with Digest Authentication using Spring Security.

Adding dependencies in pom.xml

We will use spring-boot-starter-security as the Maven dependency for Spring Security.
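A sketch of the dependency (the version is managed by the Spring Boot parent):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
```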


Digest related Java Configuration

@Bean
DigestAuthenticationFilter digestFilter(DigestAuthenticationEntryPoint digestAuthenticationEntryPoint, UserCache digestUserCache, UserDetailsService userDetailsService) {
  DigestAuthenticationFilter filter = new DigestAuthenticationFilter();
  filter.setAuthenticationEntryPoint(digestAuthenticationEntryPoint);
  filter.setUserDetailsService(userDetailsService);
  filter.setUserCache(digestUserCache);
  return filter;
}

@Bean
UserCache digestUserCache() throws Exception {
  return new SpringCacheBasedUserCache(new ConcurrentMapCache("digestUserCache"));
}

@Bean
DigestAuthenticationEntryPoint digestAuthenticationEntry() {
  DigestAuthenticationEntryPoint digestAuthenticationEntry = new DigestAuthenticationEntryPoint();
  digestAuthenticationEntry.setRealmName("GAURAVBYTES.COM");
  digestAuthenticationEntry.setKey("myDigestKey"); // key value is illustrative
  return digestAuthenticationEntry;
}

You need to register the DigestAuthenticationFilter in your Spring context. DigestAuthenticationFilter requires a DigestAuthenticationEntryPoint and a UserDetailsService to authenticate the user.

The purpose of the DigestAuthenticationEntryPoint is to send a valid nonce back to the user if authentication fails, or to enforce authentication.

The purpose of the UserDetailsService is to provide UserDetails, like the password and the list of roles for a user. UserDetailsService is an interface. I have implemented it with DummyUserDetailsService, which loads details for every passed userName, but you can restrict it to a few users or make it database-backed. One thing to remember is that the password needs to be in plain-text format here. You can also use InMemoryUserDetailsManager to store a handful of users, configured either through Java configuration or with XML-based configuration, which could access your application.

In the example, I have also used caching for UserDetails. I have used SpringCacheBasedUserCache, and the underlying cache is ConcurrentMapCache. You can use any other caching solution.

Running the example

You can download the example code from Github. I will be using Postman to run the example. Here are a few steps you need to follow.

1. Open Postman and enter the URL (localhost:8082).

2. Click on the Authorization tab below the URL and select Digest Auth from the Type dropdown.

3. Enter the username (gaurav), realm (GAURAVBYTES.COM), password (pwd) and algorithm (MD5), and leave nonce empty. Click the Send button.

4. You will get a 401 Unauthorized response like below.

5. If you look at the response headers, you will see the "WWW-Authenticate" header. Copy the value of the nonce field and enter it in the nonce text field.

6. Click the Send button. Voila!!! You get a valid response.

This is how we implement Digest Authentication with Spring Security. I hope you find this post informative and helpful.

In this post, we will discuss Basic Authentication and how to use it with Spring Security.

BASIC Authentication

  • It is the simplest of all techniques and probably the most used as well. Login/password forms are essentially basic authentication: you input your username and password and submit the form to the server, and the application either identifies you as a user and allows you to use the system, or you get an error.
  • The main problem with this security implementation is that credentials are propagated in a plain way from the client to the server. Credentials are merely encoded with Base64 in transit, not encrypted or hashed in any way. Any sniffer could read the packets sent over the network.
  • HTTPS is, therefore, typically preferred over, or used in conjunction with, Basic Authentication, since it makes the conversation with the web server entirely encrypted. The best part is that nobody can even guess from the outside that Basic Auth is taking place.
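The encoding the bullet points describe can be sketched in a few lines of plain Java (class and method names are ours for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthDemo {
  // Builds the value of the HTTP "Authorization" header for Basic auth:
  // the credentials are only Base64-encoded, never encrypted or hashed
  static String basicAuthHeader(String username, String password) {
    String token = username + ":" + password;
    return "Basic " + Base64.getEncoder()
        .encodeToString(token.getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) {
    // prints: Basic Z2F1cmF2OmJ5dGVz
    System.out.println(basicAuthHeader("gaurav", "bytes"));
  }
}
```

Anyone intercepting the header can trivially reverse it with Base64.getDecoder(), which is exactly why HTTPS matters here.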

Let's create a simple Spring Boot application with Basic Authentication enabled. You can read my previous post on how to create a simple Spring Boot application if you are not familiar with it.

Add dependencies in pom.xml

We will add spring-boot-starter-security dependency to the pom.xml


Configurations for Basic Authentication

We need to register BasicAuthenticationFilter and BasicAuthenticationEntryPoint as beans in the Spring context.

@Bean
BasicAuthenticationFilter basicAuthFilter(AuthenticationManager authenticationManager, BasicAuthenticationEntryPoint basicAuthEntryPoint) {
  return new BasicAuthenticationFilter(authenticationManager, basicAuthEntryPoint);
}

@Bean
BasicAuthenticationEntryPoint basicAuthEntryPoint() {
  BasicAuthenticationEntryPoint bauth = new BasicAuthenticationEntryPoint();
  bauth.setRealmName("GAURAVBYTES"); // realm name is illustrative
  return bauth;
}

Enabling basic authentication and configuring properties

Basic Authentication is enabled by default when you add Spring Security to your classpath. You need to configure the username and password for basic authentication. Here are some of the security properties; see SecurityProperties for other properties that you can configure, like the realm name.

security:
  basic:
    enabled: true
  user:
    name: gaurav
    password: bytes

XML based configuration for Basic Authentication

<beans:beans xmlns="http://www.springframework.org/schema/security"
    xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security.xsd">

    <http>
        <intercept-url pattern="/*" access="ROLE_USER" />
        <!-- Adds support for basic authentication -->
        <http-basic />
    </http>

    <user-service>
        <user name="gaurav" password="bytes" authorities="ROLE_USER" />
    </user-service>
</beans:beans>

This is how to enable basic authentication in Spring Boot application using Spring Security. You can get the full working example code for basic authentication on Github.

In this post, we will create Restful web-services which use JPA to persist the data in an embedded database (H2). Also, you can read more on Restful web-services.

Adding pom.xml dependencies

We will add spring-boot-starter-data-jpa to manage the JPA dependencies. We will use the H2 embedded database for persistence.
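A sketch of the dependencies (versions are managed by the Spring Boot parent; the runtime scope for H2 is an assumption):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>
```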


Creating entities

We have three entities in the example project viz. Product, Rating, User.

@Entity
@Table(name = "product_ratings", schema = "product")
public class Rating {
  @Id
  @GeneratedValue
  private Long ratingId;

  private double rating;
  private String productId;
  private String userId;

  public Rating() {
  }

  public Rating(Long ratingId, double rating, String productId, String userId) {
    this.ratingId = ratingId;
    this.rating = rating;
    this.productId = productId;
    this.userId = userId;
  }

  // getters, setters, toString, hashCode, equals
}

The @Entity annotation specifies that this is an entity class. @Table specifies the primary table for the entity class; you can configure the table name and schema with it. @Id specifies that the field is the primary key of the entity. @GeneratedValue specifies how the primary key will be generated. @Column is used to specify the mapped column for a property or field. You can also configure whether the property is unique or nullable, its length, precision and scale, and/or whether it is included in inserts or updates.

Creating Repositories

You can extend the JpaRepository or CrudRepository interface to create your repository.

public interface ProductRepository extends JpaRepository<Product, String> {
}

Here, I created a ProductRepository interface which extends the JpaRepository interface. You may wonder: instead of writing a repository class, we have created an interface, so where does it get its implementation? The simple answer is the SimpleJpaRepository class. A proxy is generated by Spring and all the requests are catered for by SimpleJpaRepository.

It contains all the basic methods like find, delete, save and findAll, and a few sort-related/criteria-based search methods. There could be a case where you need to write your own specific method; in my case, finding all the ratings of a product. This can be done as follows.

public interface RatingRepository extends JpaRepository<Rating, Long> {
  public Iterable<Rating> getRatingsByProductId(final String productId);
}

@EnableJpaRepositories annotation

This annotation enables JPA repositories. By default, it scans for Spring Data repositories in the package of the annotated configuration class. You can also change the basePackages to scan in this annotation.

@SpringBootApplication
@EnableJpaRepositories
public class App {
  public static void main(String[] args) {
    SpringApplication.run(App.class, args);
  }
}

In our example, we have used this annotation on our App class, so it will scan all the packages in and under com.gauravbytes.gkart.

These are the few steps to create a simple JPA project. You can get the full code on Github.

A few important points

If you are using the embedded server in the above example, then you may need to set the following configurations.

  • Add schema.sql to the classpath if you are using a schema in your tables (entity classes). You can get a sample here.
  • You can change the datasource name (by default testdb) and other properties. See org.springframework.boot.autoconfigure.jdbc.DataSourceProperties for the full list of properties that you can configure.

In the previous posts, we have created a Spring Boot QuickStart, customized the embedded server and properties, and run specific code after the Spring Boot application starts.

Now, in this post, we will create Restful webservices with Jersey, deployed on Undertow, as a Spring Boot application.

Adding dependencies in pom.xml

We will add spring-boot-starter-parent as the parent of our Maven-based project. The added benefit of this is version management for Spring dependencies.


Adding spring-boot-starter-jersey dependency

This will add and configure the Jersey-related dependencies.
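A sketch of the dependency (the version comes from the parent):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jersey</artifactId>
</dependency>
```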


Adding spring-boot-starter-undertow dependency
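A sketch of the dependency (the version comes from the parent):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
</dependency>
```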


These are all the necessary spring-boot-starters we require to create Restful webservices with Jersey.

Creating a Root resource/ Controller class

What are Root resource classes?

Root resource classes are POJOs that are either annotated with @Path or have at least one method annotated with @Path or a request method designator, such as @GET, @PUT, @POST, or @DELETE.

@Component
@Path("/books") // resource path is illustrative
public class BookController {
  private final BookService bookService;

  @Autowired
  public BookController(BookService bookService) {
    this.bookService = bookService;
  }

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public Collection<Book> getAllBooks() {
    return bookService.getAllBooks();
  }

  @GET
  @Path("/{oid}")
  @Produces(MediaType.APPLICATION_JSON)
  public Book getBook(@PathParam("oid") String oid) {
    return bookService.getBook(oid);
  }

  @POST
  @Consumes(MediaType.APPLICATION_JSON)
  public Response addBook(Book book) {
    return Response.created(URI.create("/" + book.getOid())).build();
  }

  @PUT
  @Path("/{oid}")
  @Consumes(MediaType.APPLICATION_JSON)
  public Response updateBook(@PathParam("oid") String oid, Book book) {
    bookService.updateBook(oid, book);
    return Response.noContent().build();
  }

  @DELETE
  @Path("/{oid}")
  public Response deleteBook(@PathParam("oid") String oid) {
    return Response.ok().build();
  }
}
We have created a BookController class and used JAX-RS annotations.

  • @Path is used to identify the URI path (relative) that a resource class or class method will serve requests for.
  • @PathParam is used to bind the value of a URI template parameter or a path segment containing the template parameter to a resource method parameter, resource class field, or resource class bean property. The value is URL decoded unless this is disabled using the @Encoded annotation.
  • @GET indicates that annotated method handles HTTP GET requests.
  • @POST indicates that annotated method handles HTTP POST requests.
  • @PUT indicates that annotated method handles HTTP PUT requests.
  • @DELETE indicates that annotated method handles HTTP DELETE requests.
  • @Produces defines a media-type that the resource method can produce.
  • @Consumes defines a media-type that the resource method can accept.

You might have noticed that we annotated BookController with @Component, which is Spring's annotation, to register it as a bean. We did so to benefit from Spring's DI for injecting the BookService service class.

Creating a JerseyConfiguration class

@Component
@ApplicationPath("/api") // base path is illustrative
public class JerseyConfiguration extends ResourceConfig {
  public JerseyConfiguration() {
  }

  @PostConstruct
  public void setUp() {
    register(BookController.class);
    register(GenericExceptionMapper.class);
  }
}

We created a JerseyConfiguration class which extends ResourceConfig from the org.glassfish.jersey.server package, which configures the web application. In setUp(), we registered BookController and GenericExceptionMapper.

@ApplicationPath identifies the application path that serves as the base URI for all the resources.

Registering exception mappers

There could be a case where some exceptions (runtime/checked) occur in the resource methods. You can write your own custom exception mappers to map Java exceptions to HTTP responses.

@Provider
public class GenericExceptionMapper implements ExceptionMapper<Throwable> {
  @Override
  public Response toResponse(Throwable exception) {
    return Response.serverError().entity(exception.getMessage()).build();
  }
}

We have created a generic exception handler by catching Throwable. Ideally, you should write finer-grained exception mappers.

What is @Provider annotation?

It marks an implementation of an extension interface that should be discoverable by JAX-RS runtime during a provider scanning phase.

We have also created the service BookService and the model Book. You can grab the full code from Github.

Running the application

You can use Maven to run it directly with the mvn spring-boot:run command, or you can create a jar and run it.

Testing the rest endpoints

I have used the Postman extension available in the Chrome browser to test the rest services. You can use any package/API/software to test it.

This is how we create Restful web-services with Jersey in conjunction with Spring Boot. I hope you find this post informative and helpful in creating your first, but not last, Restful web-service.

Spring Boot provides two interfaces, CommandLineRunner and ApplicationRunner, to run a specific piece of code when the application is fully started. These runners are called just before run() on SpringApplication completes.


The CommandLineRunner interface provides access to the application arguments as a string array. Let's see the example code for more clarity.

@Component
public class CommandLineAppStartupRunner implements CommandLineRunner {
  private static final Logger logger = LoggerFactory.getLogger(CommandLineAppStartupRunner.class);

  @Override
  public void run(String... args) throws Exception {
    logger.info("Application started with command-line arguments: {} . \n To kill this application, press Ctrl + C.", Arrays.toString(args));
  }
}


ApplicationRunner wraps the raw application arguments and exposes the ApplicationArguments interface, which has many convenient methods to get the arguments: getOptionNames() returns all the argument names, getOptionValues() returns the argument values, and getSourceArgs() returns the raw source arguments. Let's see an example.

@Component
public class AppStartupRunner implements ApplicationRunner {
  private static final Logger logger = LoggerFactory.getLogger(AppStartupRunner.class);

  @Override
  public void run(ApplicationArguments args) throws Exception {
    logger.info("Your application started with option names : {}", args.getOptionNames());
  }
}

When to use it

When you want to execute some piece of code exactly before the application startup completes, you can use these runners. In one of our projects, we used them to source data from another microservice via service discovery, which was registered in Consul.


You can register as many application/command-line runners as you want. You just need to register them as beans in the application context, and Spring will automatically pick them up. You can also order them, either by implementing the org.springframework.core.Ordered interface or with the @Order annotation.

This is all about application/command-line runners. You can also see org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner from the Spring Boot batch auto-configuration, which implements CommandLineRunner to register and start batch jobs at application startup. I hope you find this informative and helpful. You can grab the full example code on Github.

In the previous post, we created a web-based Spring Boot application which uses embedded Tomcat as the default server running on the default port 8080. Spring Boot supports Tomcat, Undertow and Jetty as embedded servers. Now, we will change and/or configure the default embedded server and the properties common to all the available servers.

Spring Boot provides a convenient way of configuring dependencies with its starters. For changing the embedded server, we will use its spring-boot-starter-undertow.

Adding dependencies


spring-boot-starter-web comes with Embedded Tomcat. We need to exclude this dependency.
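A sketch of the two changes (versions are managed by the parent): excluding Tomcat from the web starter and adding the Undertow starter.

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
</dependency>
```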


This is all we need to do to change the embedded server. There are some generic properties applicable to every server, and some server-specific properties, that we can tweak to improve performance. Let's change some of the server properties.

Changing the default server port

The server.port property configures the port on which our Spring Boot application should run.

Enabling compression on responses

You can enable compression on responses sent by the server and can tweak the mimeTypes and minResponseSize used for compression. By default, compression is disabled. The default value for mimeTypes is text/html, text/xml, text/plain, text/css, text/javascript, application/javascript, and the default value for minResponseSize is 2048 bytes.

Other server properties

You can also enable SSL, and modify maxHttpPostSize, contextParameters, contextPath and other server-related properties. To know more, see the org.springframework.boot.autoconfigure.web.ServerProperties class.

Configuring server-specific properties

You can also change embedded server specific properties. In our example, we have changed embedded server to Undertow and have tweaked its ioThreads and workerThreads properties.

A sample properties file with the above-mentioned property changes:

server:
  port: 8082
  undertow:
    ioThreads: 15
    workerThreads: 150
    accesslog:
      enabled: true
  compression:
    enabled: true
    mimeTypes: text/xml, text/css, text/html, application/json
    minResponseSize: 4096

spring:
  application:
    name: gaurav-bytes-embedded-server-example

I hope this post is informative and helpful. You can grab the full example code on Github.

In this post, we will create a simple Spring Boot application which will run on embedded Apache Tomcat.

What is Spring Boot?

Spring Boot helps in creating stand-alone, production-grade applications easily with minimum fuss. It is an opinionated view of the Spring framework and other third-party libraries, built on convention-based configuration.

Let's start building Spring Boot Application.

Adding dependencies in pom.xml

We will first add spring-boot-starter-parent as the parent of our Maven-based project.
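A sketch of the parent declaration (the version shown is illustrative; use the release current for your project):

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.1.RELEASE</version>
</parent>
```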


The benefit of adding spring-boot-starter-parent is that version management of dependencies becomes easy. You can omit the version on a required dependency; it will pick the one configured in the parent pom or the starter poms. It also conveniently sets up the build-related configuration.

Adding spring-boot-starter-web dependency

This will configure and add all the required dependencies for the spring-web module.
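A sketch of the dependency (the version comes from the parent):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```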


Writing App class

@SpringBootApplication
public class App {
  public static void main(String[] args) {
    SpringApplication.run(App.class, args);
  }
}

@SpringBootApplication indicates that the class is a configuration class and also triggers auto-configuration, through @EnableAutoConfiguration, and component scanning, through the @ComponentScan annotation it contains.


@EnableAutoConfiguration enables the auto-configuration of the Spring application context. It attempts to configure your application as per the classpath dependencies that you have added.

In the main() of the App class, we delegate the call to the run() method of SpringApplication. SpringApplication will bootstrap and auto-configure our application and, in our case, start the embedded Tomcat server. In the run method, we pass App.class as an argument, which tells Spring that this is our primary Spring component (it helps in bootstrapping).

Writing HelloGbController

@RestController
public class HelloGbController {
  @GetMapping("/")
  public String helloGb() {
    return "Gaurav Bytes says, \"Hello There!!!\"";
  }
}

I have used two annotations: @RestController and @GetMapping. You can read more on the new annotations introduced by Spring here.

@RestController signifies that this class is a web @Controller and Spring will consider it when handling incoming web requests.

Running the application

You can use the Maven command mvn spring-boot:run to run it as a Spring Boot application. When you hit localhost:8080 in your web browser, you will see the below web page.

Creating a jar for spring boot application

You need to add the spring-boot-maven-plugin to your build configuration in pom.xml. Then you can create a jar with the Maven command mvn package and simply run it with the command java -jar spring-boot-quickstart-0.0.1-SNAPSHOT.jar.
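A sketch of the plugin declaration (the version is managed by the parent):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
```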


This is how you can build a simple spring boot application. I hope you find this post helpful. You can download the example code from Github.

In this post, we will discuss various bean scopes and their differences.

Bean scopes

Spring supports seven bean scopes, of which five are available only if your ApplicationContext is web-aware.

# | Scope | Explanation
1 | singleton | There will be a single object of the bean per Spring IoC container (default).
2 | prototype | Scopes a single bean definition to any number of object instances. Every time you get the object of a prototype bean from the context, it will be brand new.
3 | request | Scopes the bean definition to the lifecycle of an HTTP request. Only available in a web-aware ApplicationContext.
4 | session | Scopes the bean definition to the lifecycle of an HTTP session. Only available in a web-aware ApplicationContext.
5 | globalSession | Scopes the bean definition to the lifecycle of a global HTTP session, usually used within a Portlet context. Only available in a web-aware ApplicationContext.
6 | application | Scopes the bean definition to the ServletContext. Only available in a web-aware ApplicationContext.
7 | websocket | Scopes the bean to the lifecycle of a WebSocket. Only available in a web-aware ApplicationContext.

Singleton Vs Prototype

Let's see an example which shows the difference between singleton and prototype scope for a bean.

public class Dictionary {
  private List<String> words;

  public Dictionary() {
    words = new ArrayList<>();
  }

  public void addWord(String word) {
    this.words.add(word);
  }

  public int totalWords() {
    return this.words.size();
  }

  public String toString() {
    return words.toString();
  }
}

We first defined a class Dictionary.

Singleton scope

There will be only one shared instance of a singleton bean per context, and all requests for that bean definition will end up returning the same object from the container.

@Configuration
public class ScopeConfig {
  @Bean(name = "singletonDictionary")
  @Scope("singleton") // you can omit the scope; by default it is singleton
  Dictionary singletonDictionary() {
    return new Dictionary();
  }
}

We created a configuration class ScopeConfig and a Dictionary bean. The @Scope annotation is used to mark the scope of the bean as singleton. If we don't define any scope, it is considered a singleton-scoped bean by default.

public class App {
  private static final Logger logger = Logger.getLogger(App.class.getName());

  public static void main(String[] args) {
    try (ConfigurableApplicationContext context = new AnnotationConfigApplicationContext(ScopeConfig.class)) {
      Dictionary singletonDictionary = context.getBean("singletonDictionary", Dictionary.class);
      logger.info("Singleton Scope example starts");
      singletonDictionary.addWord("Give");
      singletonDictionary.addWord("Take");
      int totalWords = singletonDictionary.totalWords();
      logger.info("Need to have two words. Total words are : " + totalWords);
      logger.info(singletonDictionary.toString());
      singletonDictionary = context.getBean("singletonDictionary", Dictionary.class);
      logger.info("Need to have two words. Total words are : " + singletonDictionary.totalWords());
      logger.info(singletonDictionary.toString());
      logger.info("Singleton Scope example ends");
    }
  }
}

When we run the above snippet, it generates output like below.

Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: Singleton Scope example starts
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: Need to have two words. Total words are : 2
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: [Give, Take]
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: Need to have two words. Total words are : 2
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: [Give, Take]
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: Singleton Scope example ends

From the output, we can see that when we got the singletonDictionary object again from the context, it contained the previously added values.

Prototype scope

Prototype scope results in the creation of a new bean instance every time a request for that specific bean is made.

@Configuration
public class ScopeConfig {
  @Bean(name = "prototypeDictionary")
  @Scope("prototype")
  Dictionary prototypeDictionary() {
    return new Dictionary();
  }
}

We created a configuration class ScopeConfig and defined a bean prototypeDictionary. We used the @Scope annotation to mark its scope as prototype.

public class App {
  private static final Logger logger = Logger.getLogger(App.class.getName());

  public static void main(String[] args) {
    try (ConfigurableApplicationContext context = new AnnotationConfigApplicationContext(ScopeConfig.class)) {
      Dictionary prototypeDictionary = context.getBean("prototypeDictionary", Dictionary.class);
      logger.info("Prototype scope example starts");
      prototypeDictionary.addWord("Give 2");
      prototypeDictionary.addWord("Take 2");
      logger.info("Need to have two words. Total words are: " + prototypeDictionary.totalWords());
      logger.info(prototypeDictionary.toString());
      prototypeDictionary = context.getBean("prototypeDictionary", Dictionary.class);
      logger.info("zero word count. Total words are: " + prototypeDictionary.totalWords());
      logger.info(prototypeDictionary.toString());
    }
  }
}

The above code snippet generates the output below.

Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: Prototype scope example starts
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: Need to have two words. Total words are: 2
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: [Give 2, Take 2]
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: zero word count. Total words are: 0
Feb 12, 2017 11:50:18 PM com.gauravbytes.springbeanscope.App main
INFO: []

From the output logs, you can clearly see that when we got the prototypeDictionary object from the context again, it returned a new object with none of the previously added words in it.

When to use Singleton and Prototype

Use prototype scope for all stateful beans and singleton scope for stateless beans.

This is all about bean scopes. I hope you find this post informative. You can find the example code on Github.

What is Dependency Injection?

Dependency injection is a process in which objects define their dependencies, i.e. the other objects they require to work, through constructor arguments, setter methods or factory methods. The container's responsibility is to inject those dependencies while creating the beans. With Dependency Injection in place, we get cleaner code and a clear way of decoupling. There are two prominent variants of Dependency Injection.

  • Constructor based Dependency Injection
  • Setter based Dependency Injection

Constructor based Dependency Injection

In constructor-based DI, you express your dependencies through constructor arguments, and the container invokes your constructor with the number and types of arguments the constructor expects. Let's jump to a quick example.

@Component
public class ConstructorBasedFileParser {
  private Parser parser;

  @Autowired
  public ConstructorBasedFileParser(Parser parser) {
    this.parser = parser;
  }

  public void setParser(Parser parser) {
    this.parser = parser;
  }

  public void parseFile(File file) {
    if (parser.canParse(file)) {
      parser.parse(file); // parse call assumed; the original snippet was truncated here
    }
  }
}

In the above code snippet, ConstructorBasedFileParser is a component which expresses its dependency on Parser through its constructor, marked with the @Autowired annotation.

Configuration class for the above code snippet looks like this.

@Configuration
@Import(value = ParserConfig.class)
@ComponentScan(basePackages = "com.gauravbytes.di.parser.constructor")
public class ConstructorBasedDIConfig {
}

@Configuration declares it as a Spring configuration class. @ComponentScan is used along with configuration classes to scan for components. @Import imports one or more configuration classes; it is equivalent to the <import/> element in XML configuration.

Setter based Dependency Injection

Setter-based dependency injection is accomplished by the container calling setter methods on beans after invoking the no-argument constructor. Let's jump to an example to see how to use setter-method dependency injection.

@Component
public class SetterBasedFileParser {
  private Parser parser;

  public SetterBasedFileParser() {
  }

  @Autowired
  public void setParser(Parser parser) {
    this.parser = parser;
  }

  public void parseFile(File file) {
    if (parser.canParse(file)) {
      parser.parse(file); // parse call assumed; the original snippet was truncated here
    }
  }
}

In the above code snippet, SetterBasedFileParser is a component class which expresses its dependency through the setter method setParser(), marked with the @Autowired annotation.

When to use Constructor-based vs Setter-based DI?

Per the Spring documentation, use constructor-based DI for mandatory dependencies and setter-based DI for optional dependencies. It is advisable to use constructor-based DI: it makes your classes immutable and ensures that required dependencies are available before the bean is constructed. If you need to reconfigure a bean later, use setter-based DI.

Circular dependencies

There could be a case where a bean, say A, is dependent on bean B and B is in turn dependent on bean A (both expressing their dependencies through constructors). The Spring IoC container detects this at runtime and throws a BeanCurrentlyInCreationException.

A possible solution is to use setter-based injection in some of the beans involved.

I hope you find this post useful. You can grab the full example code used on Github.

In this post, we will learn about @Import annotation and its usage. You can see my previous post on how to create a simple spring core project.

What is @Import annotation and usage?

The @Import annotation is equivalent to the <import/> element in Spring XML configuration. It helps in splitting a single Java-based configuration file into small, modular, maintainable, component-based configurations. Let's see it with an example.

@Configuration
@Import(value = { DBConfig.class, WelcomeGbConfig.class })
public class HelloGbAppConfig {
}


In the above code snippet, we are importing two different configuration classes, viz. DBConfig and WelcomeGbConfig, into the application-level configuration class HelloGbAppConfig.

The above code is equivalent to Spring XML based configuration below.

<beans xmlns=""
       xmlns:xsi=""
       xsi:schemaLocation="">

  <import resource="config/welcomegbconfig.xml"/>
  <import resource="config/dbconfig.xml"/>

</beans>

You can see the full example code for Java based configuration on Github.

In this post, we will create a spring context and will register bean via Java configuration file. You can see my previous post on how to create a simple spring core project.

What is @Configuration annotation?

The @Configuration annotation indicates that a class declares one or more bean methods and can be processed by the Spring container to generate bean definitions at runtime. The @Bean annotation is used at method level to signify that the returned object should be registered as a bean in the Spring context. Let's create a quick configuration class.

@Configuration
public class WelcomeGbConfig {

  @Bean
  GreetingService greetingService() {
    return new GreetingService();
  }
}
Now, we will create spring context as follows.

// using try with resources so that this context closes automatically
try (ConfigurableApplicationContext context = new AnnotationConfigApplicationContext(
      WelcomeGbConfig.class)) {
  GreetingService greetingService = context.getBean(GreetingService.class);
  greetingService.greet();
}

1. We created the spring context.
2. We got the bean from the context.
3. We called greet() on the bean object.

This is how you can use a Java-based configuration class to define beans and have them processed by the spring context. You can also find the full example code on Github.

In this post, we will create a Spring context and will get a bean object from it.

What is Spring context?

Spring context is also termed the Spring IoC container, which is responsible for instantiating, configuring and assembling beans by reading configuration metadata from XML, Java annotations and/or Java code in configuration files.

Technologies used

Spring 4.3.6.RELEASE, Maven Compiler 3.6.0 and Java 1.8

We will first create a simple maven project. You can select maven-archetype-quickstart as the archetype.

Adding dependencies in pom.xml

We will add spring-framework-bom in the dependency management.
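A sketch of the BOM import in pom.xml (the version is taken from the Technologies section above; the coordinates are the standard org.springframework ones):

```xml
<dependencyManagement>
  <dependencies>
    <!-- imports the Spring BOM so individual spring-* dependencies need no version -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-framework-bom</artifactId>
      <version>4.3.6.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```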


The benefit of adding this is that you can manage the versions of all added Spring dependencies from one place; you can then omit the version number for individual Spring dependencies.


Now, we will create a class GreetingService which is eligible to get registered as bean in Spring context.

@Service
public class GreetingService {
  private static final Logger logger = Logger.getLogger(GreetingService.class.getName());

  public GreetingService() {
  }

  public void greet() {"Gaurav Bytes welcomes you for your first tutorial on Spring!!!");
  }
}

The @Service annotation at class level means that this is a service class and is eligible to be registered as a bean in the Spring context.

Instantiating a container

Now, we will create an object of the Spring context. We are using AnnotationConfigApplicationContext as the spring container. There also exist other spring containers like ClassPathXmlApplicationContext, GenericGroovyApplicationContext etc., which we will discuss in future posts.

ConfigurableApplicationContext context = new AnnotationConfigApplicationContext(
    "com.gauravbytes.hellogb.service"); // base package to scan (package name illustrative)

As you can see, at the time of object construction of AnnotationConfigApplicationContext, I am passing one string parameter. This parameter (of varargs type) is the base package(s) which the spring context will scan for bean registration.

Now, we will get object of bean by calling getBean() on spring context.

GreetingService greetingService = context.getBean(GreetingService.class);

At last, we are closing the spring container by calling close().

It is important to close the spring context(container) after use. By closing it, we ensure that it will release all the resources and locks that its implementation might hold and will also destroy all the cached singleton beans.

We have also included maven-compiler-plugin in pom.xml to compile the java sources with the configured java version (in our case it is Java 1.8).

You can also find the example code on Github.

This article is in continuation to my other posts on Functional Interfaces, static and default methods and Lambda expressions.

Method references are a special form of Lambda expression. When your lambda expression does nothing other than invoke existing behaviour (a method), you can achieve the same by referring to that method by name.

  • :: is used to refer to a method.
  • Method type arguments are inferred from the context in which the reference is defined.

Types of method references

  • Static method reference
  • Instance method reference of particular object
  • Instance method reference of an arbitrary object of particular type
  • Constructor reference

Static method reference

When you refer to a static method of the containing class, e.g. ClassName::someStaticMethodName.

class MethodReferenceExample {
  public static int compareByAge(Employee first, Employee second) {
    return, second.age);
  }
}

Comparator<Employee> compareByAge = MethodReferenceExample::compareByAge;

Instance method reference of particular object

When you refer to the instance method of particular object e.g. containingObjectReference::someInstanceMethodName

static class MyComparator {
  public int compareByFirstName(User first, User second) {
    return first.getFirstName().compareTo(second.getFirstName());
  }

  public int compareByLastName(User first, User second) {
    return first.getLastName().compareTo(second.getLastName());
  }
}

private static void instanceMethodReference() {
  System.err.println("Instance method reference");
  List<User> users = Arrays.asList(new User("Gaurav", "Mazra"),
      new User("Arnav", "Singh"), new User("Daniel", "Verma"));
  MyComparator comparator = new MyComparator();
  Collections.sort(users, comparator::compareByFirstName);
}

Instance method reference of an arbitrary object of particular type

When you refer to an instance method of a class via the class name, e.g. ClassName::someInstanceMethod.

Comparator<String> stringIgnoreCase = String::compareToIgnoreCase;
//this is equivalent to
Comparator<String> stringComparator = (first, second) -> first.compareToIgnoreCase(second);

Constructor reference

When you refer to a constructor of a class in a lambda, e.g. ClassName::new.

Function<String, Job> jobCreator = Job::new;
//the above function is equivalent to
Function<String, Job> jobCreator2 = (jobName) -> new Job(jobName);

You can find the full example on github.

You can also view my other article on Java 8

In this post, we will cover following topics.

  • What are Streams?
  • What is a pipeline?
  • Key points to remember for Streams.
  • How to create Streams?

What are Streams?

Java 8 introduced the new package, which contains classes to perform SQL-like operations on elements. A Stream is a sequence of elements on which you can perform aggregate operations (reduction, filtering, mapping, average, min, max etc.). It is not a data structure that stores elements like a collection, but carries values, often lazily computed, from a source through a pipeline.

What is a pipeline?

A pipeline is a sequence of aggregate operations (intermediate and terminal) on a source. It has the following components.

  • A source: collections, generator functions, arrays, I/O channels etc.
  • Zero or more intermediate operations: filter, map, sequential, sorted, distinct, limit, flatMap, parallel etc. Each intermediate operation produces a new stream.
  • A terminal operation: forEach, reduce, noneMatch, allMatch, count, findFirst, findAny, min, max etc.
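Putting the three components together, a minimal pipeline (source, intermediate operations, terminal operation) can be sketched like this; the word list is illustrative:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PipelineDemo {
  public static void main(String[] args) {
    List<String> words = Arrays.asList("alpha", "beta", "gamma", "delta"); // source
    List<String> result =
        .filter(s -> s.length() > 4)      // intermediate: keep words longer than 4 chars
        .map(String::toUpperCase)         // intermediate: transform each element
        .collect(Collectors.toList());    // terminal: materialize the result
    System.out.println(result);           // [ALPHA, GAMMA, DELTA]
  }
}
```

Nothing runs until collect() is invoked; the two intermediate operations only describe the pipeline.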

Key points to remember for Streams

  • No storage.
  • Functional in nature.
  • Laziness-seeking.
  • Possibly unbounded. Operations, for example, limit(n) or findFirst() can permit calculations on infinite streams to finish in finite time.
  • Consumable. The elements can be visited only once. To revisit, you need to create a new stream.

How to create Streams?

1. In Collection, you can create streams by calling stream(), parallelStream().

Collection<Person> persons = StreamSamples.getPersons();;

// parallel stream
persons.parallelStream().forEach(System.out::println);
2. From Stream interface, calling static factory method of() which takes varargs of T type.

Stream.of("This", "is", "how", "you", "create", "stream", "from", "static", "factory",
      "method").map(s -> s.concat(" ")).forEach(System.out::print);

3. From Arrays class, by calling the stream() static method. String[] { "This", "is", "how", "you", "create", "stream", ".",
    "Above", "function", "use", "this" }).map(s -> s.concat(" "))
    .forEach(System.out::print);

4. From Stream by calling iterate(). It produces an infinite stream.

// iterate returns an infinite stream... beware of infinite streams
// note: i -> i + 1, not i++ (i++ evaluates to the old value, yielding 1 forever)
Stream.iterate(1, i -> i + 1).limit(10).forEach(System.out::print);

5. From IntStream by calling range.

// rangeClosed includes the upper bound; range(1, 10) would sum only 1..9
int sumOfFirst10PositiveNumbers = IntStream.rangeClosed(1, 10).reduce(0, Integer::sum);

6. From Random by calling ints(). It produces an infinite stream.

// random.ints for random number
new Random().ints().limit(20).forEach(System.out::println);

7. From BufferedReader by calling lines(). Streams of file paths can also be obtained from methods of the Files class, such as list() and walk().

try (BufferedReader br = new BufferedReader(new StringReader(myValue))) {
  br.lines().forEach(System.out::println);
}
catch (IOException io) {
  System.err.println("Got this:>>>> " + io);
}
I hope the post is informative and helpful in understanding Streams. You can find the full example code on Github.

You can also read about aggregate operations on Streams.

This post is in continuation with my earlier posts on Streams. In this post we will discuss aggregate operations on Streams.

Aggregate operations on Streams

You can perform intermediate and terminal operations on Streams. Intermediate operations result in a new stream and are lazily evaluated; evaluation starts only when a terminal operation is invoked. -> p.getGender() == Gender.MALE).forEach(System.out::println);

In the snippet above, filter() doesn't start filtering immediately but creates a new stream. Filtering only starts when a terminal operation is called, in the above case forEach().

Intermediate operations

There are many intermediate operations that you can perform on Streams. Some of them are filter(), distinct(), sorted(), limit(), parallel(), sequential(), map() and flatMap().

filter() operation

This takes the Predicate functional interface as argument, and the output stream of this operation will contain only those elements which pass the conditional check of the Predicate. You can find a nice explanation of Predicates here.

// all the males
List<Person> allMales = -> p.getGender() == Gender.MALE).collect(Collectors.toList());

map() operation

It is a mapping operation. It expects the Function functional interface as argument. The purpose of a Function is to transform one type to another (the other type could be the same).

// first names of all the persons
List<String> firstNames =;


distinct() operation

It returns the unique elements and uses equals() under the hood to remove duplicates.

List<String> uniqueFirstNames =;



sorted() operation

It sorts the stream elements. It is a stateful operation.

List<Person> sortedByAge =;

limit() will reduce the number of elements. It is helpful for ending infinite streams in a finite manner.
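For example, pairing limit() with an infinite generator keeps the pipeline finite:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LimitDemo {
  public static void main(String[] args) {
    // Stream.iterate produces an infinite stream; limit(5) caps it
    List<Integer> firstFive = Stream.iterate(1, i -> i + 1)
        .limit(5)
        .collect(Collectors.toList());
    System.out.println(firstFive); // [1, 2, 3, 4, 5]
  }
}
```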

Intermediate operations can be divided into two categories: stateless and stateful. Most stream intermediate operations are stateless, e.g. map, filter, limit etc., but some are stateful, e.g. distinct and sorted, because they have to maintain state about previously seen elements.

Terminal/ Reduction operations

There are many terminal operations, such as forEach(), reduce(), max(), min(), average(), collect(), findAny(), findFirst(), allMatch() and noneMatch().


forEach() operation

It takes a Consumer functional interface as parameter and passes each element on for consumption.;

max(), min(), average() operations

average() returns OptionalDouble whereas max() and min() return OptionalInt.

// average age of all persons;

// max age from all persons;

// min age from all persons;

noneMatch(), allMatch(), anyMatch()

These return whether the given condition is satisfied by none, all, or any of the elements of the stream, respectively.

//age of all females in the group is less than 22 -> p.getGender() == Gender.FEMALE).allMatch(p -> p.getAge() < 22);

//not a single male's age is greater than 30 -> p.getGender() == Gender.MALE).noneMatch(p -> p.getAge() > 30);

//any person older than 45 -> p.getAge() > 45);

Reduction operations

Reduction operations are those which produce a single value as result. We have seen some of them in the previous snippets, e.g. max(), min(), average(), sum() etc. Apart from these, Java 8 provides two more general-purpose operations: reduce() and collect().


reduce() operation

int sumOfFirst10 = IntStream.rangeClosed(1, 10).reduce(0, Integer::sum);


collect() operation

It is a mutable reduction. The Collectors class has many useful collectors like toList(), groupingBy(), counting(), averagingInt(), summarizingInt(), joining() and mapping().

Collection<Person> persons = StreamSamples.getPersons();
List<String> firstNameOfPersons =;

Map<Integer, List<Person>> personByAge =;

Double averageAge =;

Long totalPersons =;

IntSummaryStatistics personsAgeSummary =;

String allPersonsFirstName =, Collectors.joining("#")));

The result would look like this.

[Gaurav, Gaurav, Sandeep, Rami, Jiya, Rajesh, Rampal, Nisha, Neha, Ramesh, Parul, Sunil, Prekha, Neeraj]
{32=[Person [firstName=Rami, lastName=Aggarwal, gender=FEMALE, age=32, salary=12000]], 35=[Person [firstName=Rampal, lastName=Yadav, gender=MALE, age=35, salary=12000]], 20=[Person [firstName=Prekha, lastName=Verma, gender=FEMALE, age=20, salary=3600]], 21=[Person [firstName=Neha, lastName=Kapoor, gender=FEMALE, age=21, salary=5500]], 22=[Person [firstName=Jiya, lastName=Khan, gender=FEMALE, age=22, salary=4500], Person [firstName=Ramesh, lastName=Chander, gender=MALE, age=22, salary=2500]], 24=[Person [firstName=Sandeep, lastName=Shukla, gender=MALE, age=24, salary=5000]], 25=[Person [firstName=Parul, lastName=Mehta, gender=FEMALE, age=25, salary=8500], Person [firstName=Neeraj, lastName=Shah, gender=MALE, age=25, salary=33000]], 26=[Person [firstName=Nisha, lastName=Sharma, gender=FEMALE, age=26, salary=10000]], 27=[Person [firstName=Sunil, lastName=Kumar, gender=MALE, age=27, salary=6875]], 28=[Person [firstName=Gaurav, lastName=Mazra, gender=MALE, age=28, salary=10000], Person [firstName=Gaurav, lastName=Mazra, gender=MALE, age=28, salary=10000]], 45=[Person [firstName=Rajesh, lastName=Kumar, gender=MALE, age=45, salary=55000]]}
IntSummaryStatistics{count=14, sum=380, min=20, average=27.142857, max=45}

You can't consume same Streams twice

When a terminal operation has completed on a stream, the stream is considered consumed and you can't use it again. You will get an exception if you try to start new operations on an already consumed stream.

Stream<String> stream =;
stream.reduce((a, b) -> a.length() > b.length() ? a : b).ifPresent(System.out::println);

// the line below (any re-use of the consumed stream) will throw the exception
stream.filter(s -> s.length() > 5).count();

Exception in thread "main" java.lang.IllegalStateException: stream has already been operated upon or closed


Parallelism

Streams provide a convenient way to execute operations in parallel. They use ForkJoinPool under the hood to run stream operations in parallel. You can use parallelStream(), or call parallel() on an already created stream, to perform tasks in parallel. One thing to note: parallelism is not automatically faster than running tasks serially unless you have enough data and processor cores.

persons.parallelStream().filter(p -> p.getAge() > 30).collect(Collectors.toList());
Pass the java.util.concurrent.ForkJoinPool.common.parallelism system property at JVM startup to change the parallelism of the common fork-join pool.

Concurrent reductions

ConcurrentMap<Integer, List<Person>> personByAgeConcurrent = persons.parallelStream()
    .collect(Collectors.groupingByConcurrent(Person::getAge));

Prevent interference, side-effects and stateful lambdas/functions when running pipelines in parallel.

Side effects

If a function does more than consuming and/or returning a value, e.g. modifying shared state, it is said to have side-effects. Common examples with side-effects are forEach() and mutable reduction using collect(). Java handles side-effects in collect() in a thread-safe manner.


Interference

You should avoid interference in your lambdas/functions. Interference occurs when you modify the underlying collection while running pipeline operations on it.

Stateful Lambda expressions

A lambda expression is stateful if its result depends on any state which can change during execution of the pipeline. Avoid using stateful lambda expressions. You can read more here.
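A small sketch of why stateful lambdas are risky: the lambda below mutates a shared list as a side effect, so the encounter order of the result is nondeterministic under parallel execution (the contents are stable, the order is not):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.TreeSet;
import java.util.stream.IntStream;

public class StatefulLambdaDemo {
  public static void main(String[] args) {
    List<Integer> target = Collections.synchronizedList(new ArrayList<>());
    // stateful lambda: it mutates 'target' while the pipeline runs
    IntStream.rangeClosed(1, 5).parallel().forEach(target::add);
    // the elements are always 1..5, but their order inside 'target' can differ per run
    System.out.println(new TreeSet<>(target)); // [1, 2, 3, 4, 5]
  }
}
```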

I hope you find this post informative and helpful. You can find the example code for reduction, aggregate operation and stream creation on Github.

This post is in continuation to my previous posts on Apache Avro - Introduction, Apache Avro - Generating classes from Schema and Apache Avro - Serialization.

In this post, we will share insights on using Apache Avro as RPC framework.

We first need to define a protocol to use Apache Avro as RPC framework. Before going into depth of this topic, let's discuss What protocol is?

Avro protocols describe RPC interfaces. They are defined as JSON, similar to a Schema.

A protocol has following attributes

  • protocol: a string, defining the name of the protocol.
  • namespace: an optional string that qualifies the name.
  • types: an optional list of definitions of named types (like records, enums, fixed types and errors).
  • messages: an optional JSON object whose keys are method names of the protocol and whose values are objects with the attributes described below. No two messages may have the same name.

Further, a message has the following attributes

  • request: a list of named, typed parameter schemas.
  • response: a response schema.
  • errors: an optional union of declared error schemas.

Let's define a simple protocol to exchange email message between client and server.

{
  "namespace": "com.gauravbytes.avro",
  "protocol": "EmailSender",
  "types": [{
    "name": "EmailMessage", "type": "record",
    "fields": [
      { "name": "to", "type": "string" },
      { "name": "from", "type": "string" },
      { "name": "body", "type": "string" }
    ]
  }],
  "messages": {
    "send": {
      "request": [{ "name": "email", "type": "EmailMessage" }],
      "response": "string"
    }
  }
}

Here, the protocol defines an interface EmailSender which takes an EmailMessage as request and returns a string response.

We have created a mock implementation of EmailSender

public class EmailSenderImpl implements EmailSender {
  @Override
  public CharSequence send(EmailMessage email) throws AvroRemoteException {
    return email.toString();
  }
}

Now, we create a server; Apache Avro uses Netty for the same.

server = new NettyServer(new SpecificResponder(EmailSender.class, new EmailSenderImpl()),
    new InetSocketAddress(65333));

Now, we create a client which sends request to the server.

NettyTransceiver client = new NettyTransceiver(new InetSocketAddress(65333));
// client code - attach to the server and send a message
EmailSender proxy = SpecificRequestor.getClient(EmailSender.class, client);"Client built, got proxy");

// fill in the Message record and send it
EmailMessage message = new EmailMessage();
message.setTo(new Utf8(args[0]));
message.setFrom(new Utf8(args[1]));
message.setBody(new Utf8(args[2]));"Calling proxy.send with message: {} ", message.toString());"Result: {}", proxy.send(message));

// cleanup
client.close();

This is how we can use Apache Avro as RPC framework. I hope you found this article useful. You can download the full example code from Github.

In this post, we will cover following topics.

  • What are Lambda expressions?
  • Syntax for Lambda expression.
  • How to define no parameter Lambda expression?
  • How to define single/ multi parameter Lambda expression?
  • How to return value from Lambda expression?
  • Accessing local variables in Lambda expression.
  • Target typing in Lambda expression.

What are Lambda expressions?

Lambda expressions are Java's first step towards functional programming. They enable us to treat functionality as a method argument and to express instances of single-method interfaces more compactly.

Syntax for Lambda expression

Lambda has three parts:

  • comma-separated list of formal parameters enclosed in parentheses.
  • arrow token ->.
  • and, body of expression (which may or may not return value).

(param) -> { System.out.println(param); }

Lambda expressions can only be used where the target type is a functional interface.

How to define no parameter Lambda expression?

If the lambda expression matches a method with no parameters, it can be written as:

() -> System.out.println("No parameter expression");

How to define single/ multi parameter Lambda expression?

If the lambda expression matches a method which takes one or more parameters, it can be written as:

(param) -> System.out.println("Single param expression: " + param);

(paramX, paramY) -> System.out.println("Two param expression: " + paramX + ", " + paramY);

You can also define the type of parameter in Lambda expression.

(Employee e) -> System.out.println(e);

How to return value from Lambda expression?

You can return a value from a lambda just like a method does.

(param) -> {
  // perform some steps
  return "some value";
}

In case the lambda performs a single step and returns a value, you can write it as:

// Integer.sum stands in here for the summing call, which was truncated in the original
(int a, int b) -> { return Integer.sum(a, b); };

// or simply, the lambda will automatically return the value of the expression
(int a, int b) -> Integer.sum(a, b);

Accessing local variables in Lambda expression

A lambda can access final or effectively final local variables of the method in which it is defined. It can also access instance variables of the enclosing class.
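A short sketch of the effectively-final rule (the variable names here are illustrative):

```java
public class CaptureDemo {
  public static void main(String[] args) {
    String greeting = "Hello, ";               // effectively final: never reassigned
    Runnable r = () -> System.out.println(greeting + "lambda");;                                     // prints: Hello, lambda

    // Uncommenting the next line would make 'greeting' no longer effectively final
    // and the lambda above would fail to compile:
    // greeting = "Hi, ";
  }
}
```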

Target typing in Lambda expression

You might have seen in the earlier code snippets that we omitted the type of the parameters, the return value and the type of the Lambda. The Java compiler determines the target type from the context in which the lambda is defined.

Compiler checks three things:

  • Is the target type a functional interface?
  • Do the number and types of the parameters match those of the single abstract method?
  • Does the return type match the return type of the single abstract method?

Now, Let's jump to an example to verify it.

interface InterfaceA {
  void doWork();
}

interface InterfaceB<T> {
  T doWork();
}

class LambdaTypeCheck {

  public static void main(String[] args) {
    LambdaTypeCheck typeCheck = new LambdaTypeCheck();
    typeCheck.invoke(() -> "I am done with you");
  }

  public <T> T invoke(InterfaceB<T> task) {
    return task.doWork();
  }

  public void invoke(InterfaceA task) {
    task.doWork();
  }
}

When you call typeCheck.invoke(() -> "I am done with you"), the overload invoke(InterfaceB<T> task) is called, because the lambda returns a value and therefore matches InterfaceB<T>.

Java 8 reincarnated SAM (single abstract method) interfaces and termed them functional interfaces. A functional interface has a single abstract method and is eligible to be represented with a Lambda expression. The @FunctionalInterface annotation was introduced in Java 8 to mark an interface as functional. It ensures at compile time that the interface has only a single abstract method; otherwise it raises a compilation error.

Let's define a functional interface.

public interface Spec<T> {
  boolean isSatisfiedBy(T t);
}

Functional interfaces can have default and static methods and still remain functional interfaces.

public interface Spec<T> {
  boolean isSatisfiedBy(T t);

  default Spec<T> not() {
    return (t) -> !isSatisfiedBy(t);
  }

  default Spec<T> and(Spec<T> other) {
    return (t) -> isSatisfiedBy(t) && other.isSatisfiedBy(t);
  }

  default Spec<T> or(Spec<T> other) {
    return (t) -> isSatisfiedBy(t) || other.isSatisfiedBy(t);
  }
}

If an interface declares an abstract method overriding one of the public methods of java.lang.Object, that does not count toward the interface's abstract method count, since any implementation of the interface will have an implementation from java.lang.Object or elsewhere.
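A quick usage sketch of the Spec interface above; the Integer specs here are made-up examples showing how the default methods compose:

```java
public class SpecDemo {
  interface Spec<T> {
    boolean isSatisfiedBy(T t);

    default Spec<T> not() {
      return (t) -> !isSatisfiedBy(t);
    }

    default Spec<T> and(Spec<T> other) {
      return (t) -> isSatisfiedBy(t) && other.isSatisfiedBy(t);
    }

    default Spec<T> or(Spec<T> other) {
      return (t) -> isSatisfiedBy(t) || other.isSatisfiedBy(t);
    }
  }

  public static void main(String[] args) {
    Spec<Integer> positive = n -> n > 0;   // illustrative spec
    Spec<Integer> even = n -> n % 2 == 0;  // illustrative spec

    System.out.println(positive.and(even).isSatisfiedBy(4));  // true
    System.out.println(positive.and(even).isSatisfiedBy(3));  // false
    System.out.println(positive.not().isSatisfiedBy(-1));     // true
  }
}
```

Despite the three default methods, Spec still has exactly one abstract method, so a lambda can implement it.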

I did a comparison of Java default serialization and Apache Avro serialization of data, and the results were quite astonishing.

You can read my older posts for Java serialization process and Apache Avro Serialization.

Apache Avro consumed 15-20 times less memory to store the serialized data. I created a class with three fields (two String fields and one enum) and serialized it with both Avro and Java.

The memory used by Avro was 14 bytes, while Java used 231 bytes (length of the serialized byte[]).

Reasons for Avro generating fewer bytes

Java Serialization

The default serialization mechanism for an object writes the class of the object, the class signature, and the values of all non-transient and non-static fields. References to other objects (except in transient or static fields) cause those objects to be written also. Multiple references to a single object are encoded using a reference sharing mechanism so that graphs of objects can be restored to the same shape as when the original was written.
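To see that per-class metadata in practice, here is a self-contained sketch that serializes a small object with Java's default mechanism and prints the payload size. The class and field values are illustrative, and the exact byte count will differ from the 231 bytes quoted above:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class JavaSerializationSize {
  enum Sex { MALE, FEMALE }

  // illustrative class: two String fields and one enum, like the comparison above
  static class Employee implements Serializable {
    private static final long serialVersionUID = 1L;
    String firstName = "Gaurav";
    String lastName = "Mazra";
    Sex sex = Sex.MALE;
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
      oos.writeObject(new Employee());
    }
    // the class descriptor, signature and field metadata are all part of the payload
    System.out.println("Java serialized size: " + baos.toByteArray().length + " bytes");
  }
}
```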

Apache Avro

Avro writes only the schema as a String and the data of the class being serialized. There is no per-object overhead of writing the class of the object and the class signature as in Java. Also, the fields are serialized in a pre-determined order.

You can find the full Java example on github.

Avro can't handle circular references and throws java.lang.StackOverflowError, whereas Java's default serialization can handle them (example code for Avro and example code for Java serialization). Another observation is that Avro has no direct way of defining inheritance in the Schema (classes), whereas Java's default serialization supports inheritance with its own constraints: the superclass either needs to implement the Serializable interface or must have a default no-args constructor accessible all the way up the hierarchy (otherwise deserialization will throw a

You can also view my other posts on Avro.

This post is in continuation with my earlier posts on Apache Avro - Introduction and Apache Avro - Generating classes from Schema.

In this post, we will discuss reading (deserialization) and writing (serialization) of Avro generated classes.

"Apache Avro™ is a data serialization system." We use DatumReader<T> and DatumWriter<T> for de-serialization and serialization of data, respectively.

Apache Avro formats

Apache Avro supports two formats, JSON and Binary.

Let's move to an example using JSON format.

Employee employee = Employee.newBuilder().setFirstName("Gaurav").setLastName("Mazra").setSex(SEX.MALE).build();

DatumWriter<Employee> employeeWriter = new SpecificDatumWriter<>(Employee.class);
byte[] data;
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
  Encoder jsonEncoder = EncoderFactory.get().jsonEncoder(Employee.getClassSchema(), baos);
  employeeWriter.write(employee, jsonEncoder);
  jsonEncoder.flush();
  data = baos.toByteArray();
}

// serialized data
System.out.println(new String(data));

DatumReader<Employee> employeeReader = new SpecificDatumReader<>(Employee.class);
Decoder decoder = DecoderFactory.get().jsonDecoder(Employee.getClassSchema(), new String(data));
employee =, decoder);

//data after deserialization
System.out.println(employee);
Explanation on the way :)

Line 1: We create an object of class Employee (AVRO generated)

Line 3: We create an object of SpecificDatumWriter<T>, which implements DatumWriter<T>. There exist other implementations of DatumWriter as well, viz. GenericDatumWriter and ReflectDatumWriter.

Line 6: We create JsonEncoder by passing Schema and OutputStream where we want the serialized data and In our case, it is in-memory ByteArrayOutputStream.

Line 7: We call #write method on DatumWriter with Object and Encoder.

Line 8: We flushed the JsonEncoder. Internally, it flushes the OutputStream passed to JsonEncoder.

Line 15: We create an object of SpecificDatumReader<T>, which implements DatumReader<T>. There exist other implementations of DatumReader as well, viz. GenericDatumReader and ReflectDatumReader.

Line 16: We create JsonDecoder passing Schema and input String which will be deserialized.

Let's move to serialization and de-serialization example with Binary format.

Employee employee = Employee.newBuilder().setFirstName("Gaurav").setLastName("Mazra").setSex(SEX.MALE).build();

DatumWriter<Employee> employeeWriter = new SpecificDatumWriter<>(Employee.class);
byte[] data;
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
  Encoder binaryEncoder = EncoderFactory.get().binaryEncoder(baos, null);
  employeeWriter.write(employee, binaryEncoder);
  binaryEncoder.flush();
  data = baos.toByteArray();
}

// serialized data
System.out.println(new String(data));

DatumReader<Employee> employeeReader = new SpecificDatumReader<>(Employee.class);
Decoder binaryDecoder = DecoderFactory.get().binaryDecoder(data, null);
employee =, binaryDecoder);

//data after deserialization
System.out.println(employee);

The example is the same except for lines 6 and 16, where we create a BinaryEncoder and a BinaryDecoder instead.

This is how we can serialize and deserialize data with Apache Avro. I hope you found this article informative and useful. You can find the full example on github.