
In the previous post, we created a BlogPost application with React and Redux and managed global state with Redux. We will extend the same application and introduce Next.js for server-side rendering. The biggest benefits of using Next.js are pre-rendering of pages, along with automatic code-splitting, static site export and CSS-in-JS support.

Next.js functions

Next.js exposes three functions for data fetching: getStaticProps, getStaticPaths and getServerSideProps. The first two are used for static generation, and the last one, getServerSideProps, is used for server-side rendering. Static generation means the HTML is generated at build time, whereas with server-side rendering the HTML is generated on each request.
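For instance, a hypothetical pages/products/[id].tsx page could combine the two static-generation functions like this (a sketch; the file name and data are illustrative):

import React from 'react';
import { GetStaticPaths, GetStaticProps } from 'next';

export const getStaticPaths: GetStaticPaths = async () => ({
  // paths to pre-render at build time; any other path results in a 404
  paths: [{ params: { id: '1' } }, { params: { id: '2' } }],
  fallback: false
});

export const getStaticProps: GetStaticProps = async (context) => ({
  // props computed at build time and passed to the page component
  props: { id: String(context.params?.id ?? '') }
});

export default function Product({ id }: { id: string }) {
  return <div>Product {id}</div>;
}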

Adding required libraries

Run npm i --save next @types/next from the root of the project to add the required libraries for this example.

Update the following commands under scripts in package.json.

"dev": "next dev",
"start": "next start",
"build": "next build"

Next.js is built around the concept of pages. A page is simply a React component exported from the pages directory. For example, pages/welcome.tsx will be mapped to /welcome. You also have the option of dynamic routes, as sketched below.
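For example, a minimal pages/welcome.tsx could look like the sketch below, and a file named with brackets, e.g. pages/posts/[id].tsx, would create a dynamic route matching /posts/1, /posts/2 and so on.

// pages/welcome.tsx -> served at /welcome
import React from 'react';

export default function Welcome() {
  return <div>Welcome!</div>;
}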

Basic setup

Let's start by creating _document.tsx and _app.tsx under the src/pages directory. In _document.tsx, you define the skeleton structure of the HTML page generated by Next.js. You usually do this to add meta tags or to load scripts or styles from a CDN. Consider _app.tsx the root component of your React application. In our example, we provide the Redux store to the Provider in this component.

// _document.tsx


import React from 'react';
import Document, { Html, Head, Main, NextScript } from 'next/document'

export default class MyDocument extends Document {
  render() {
    return (
    <Html lang="en">
      <Head>
        <meta content='https://www.codefoundry.dev/' property='og:url'/>
        <meta content='Tutorials for Java, Java 8, Spring, Spring Cloud, Spring Boot, React JS, Redux, Next.JS' property='og:description'/>
        <meta content='Gaurav Rai Mazra' name='Author'/>
        <meta content='https://www.codefoundry.dev/favicon.ico' property='og:image'/>
      </Head>
      <body>
        <Main />
        <NextScript />
      </body>
    </Html>
    )
  }
}
// _app.tsx


import React from 'react';
import { AppProps } from 'next/app';
import { Provider } from 'react-redux';
import { store } from '../redux/store';

function MyApp({ Component, pageProps }: AppProps) {
  return (
    <React.StrictMode>
      <Provider store={store}>
        <Component {...pageProps} />
      </Provider>
    </React.StrictMode>
  )
}

export default MyApp;

Creating first page

Let's create our first page, index.tsx, under the src/pages directory.

// index.tsx (import paths assumed from the project layout of the earlier posts)
import React, { useEffect } from 'react';
import { useDispatch } from 'react-redux';
import { GetServerSideProps, GetServerSidePropsContext } from 'next';
import App from '../App';
import IBlogPost from '../models/IBlogPost';
import BloggerService from '../service/BloggerService';
import { setPostsAsync } from '../redux/actions';

/* Line 1 */
interface IServerProps {
  bloggerPosts: {
    allTags: string[]
    posts: IBlogPost[]
  }
}

export default (props: IServerProps) => {
  /* Line 2 */ const dispatch = useDispatch();
  useEffect(() => {
    /* Line 3 */ dispatch(setPostsAsync(props.bloggerPosts));
  }, [dispatch, props.bloggerPosts])
  return (<App />)
}

/* Line 4 */ export const getServerSideProps: GetServerSideProps = async(context: GetServerSidePropsContext<any>) => {
  /* Line 5 */ const bloggerPosts = await BloggerService.getAllPosts();
  return {
    props: {
      bloggerPosts
    }
  }
} 

At Line 1, we define the type of the props of this functional component. At Line 2, we use the useDispatch hook from React Redux to get a reference to the dispatch function. Inside the useEffect hook, at Line 3, we dispatch the bloggerPosts that were computed on the server side by Next.js (Line 4).

At Line 4, we define the getServerSideProps function, which Next.js executes on the server for every request; its result is passed as props to this functional component.

At Line 5, we call BloggerService's getAllPosts function, which retrieves the posts from the Blogger feed of http://codefoundry.dev. Let's create this service (BloggerService.ts) as well, under src/service.

/* Line 1 */ declare type BloggerEntry = {
  id: {
    $t: string
  },
  updated: {
    $t: string
  },
  published: {
    $t: string
  },
  category: Array<{scheme: string, term: string}>,
  title: {
    $t: string
  },
  summary: {
    $t: string
  },
  author: Array<{name: { $t: string }}>,
  link: Array<{ rel: string, href: string }>
}

const getAllPosts = async() => {
  /* Line 2 */ const response = await fetch('https://www.blogger.com/feeds/5554118637855932326/posts/summary?alt=json&start-index=1&max-results=100')
  const result = await response.json();
  const categories = result?.feed?.category ?? [];
  const allTags = (categories as Array<{term: string}>).map(category => category.term)
  const entries = result?.feed?.entry ?? [];
  const posts = (entries as Array<BloggerEntry>).map(entry => {
    const id = entry.id.$t;
    const datePublishedOrUpdated = entry.updated.$t || entry.published.$t;
    const tags = entry.category.map(cat => cat.term);
    const title = entry.title.$t;
    const content = entry.summary.$t;
    const author = entry.author.map(a => a.name.$t).join(', ')
    const postLink = entry.link.find(l => l.rel === 'alternate');
    const postUrl = !!postLink ? postLink.href : '';

    /* Line 3 */ return {
      id,
      tags,
      title,
      content,
      author,
      postUrl,
      postedOn: datePublishedOrUpdated
    }
  })
  return { allTags, posts };
}

export default { getAllPosts }

At Line 1, we declare a type BloggerEntry which models an entry of the Blogger feed. At Line 2, we use the Fetch API to retrieve the summary feed from Blogger (http://codefoundry.dev); we then transform it into the shape our reducer/store understands and return it (Line 3).

Cleanup App.tsx and BlogPosts.tsx

Earlier, we hard-coded posts (the POSTS array) in App.tsx and passed them to the BlogPosts component. Let's clean that up.

// App.tsx (import paths assumed; styles is a CSS module)
import React from 'react';
import BlogPosts from './components/BlogPosts';
import styles from './App.module.css';

function App() {
  return (
    <>
      <div className={styles['App-Container']}>
        <BlogPosts />
      </div>
    </>
  );
}

export default App;
// BlogPosts.tsx (import paths assumed)
import React from 'react';
import BlogPost from './BlogPost';
import BlogListing from './BlogListing';
import styles from './BlogPosts.module.css';

function BlogPosts() {
  return (
    <div className={styles["blog-container"]}>
      <BlogPost/>
      <BlogListing/>
    </div>
  );
}

Let's run the application with the command npm run dev.

That's it :). You can download the full code from github.

Recap

In this post, we first added the required libraries (next and @types/next). Then, we added scripts to build, run and start the project with Next.js. Next, we did the basic setup for a Next.js application, e.g. setting up _document.tsx and _app.tsx. Then, we created our first page, index.tsx, and created the getServerSideProps method for server-side rendering. At last, we cleaned up the App.tsx and BlogPosts.tsx files and ran the application.

What's next?

In the next post, we will use Next.js to generate a static site, along with dynamic routing in the static site. So, stay tuned!

Introduction

React was first introduced to the general public in May 2013, roughly three years after the first release of AngularJS (October 2010). It soon picked up momentum and is now the most starred (~150K) and forked (~29.2K) repository on GitHub. A strength of React compared with its contemporary libraries has been backward compatibility across all released versions. It evolved from a class-based library (extending React.Component) to a largely functional one with React Hooks, while still keeping backward compatibility; new features now include asynchronous rendering with Suspense. React's ecosystem is very vast, with lots of frameworks to choose from. We will start by building a first simple (a.k.a. Welcome) React application, and then build a full-stack application with React, Redux, Reselect, Next.js, Express.js and Node.js. So, stay tuned :)

Building your first React application

You can create a React application with a tool like create-react-app, or you can create a customized project by initializing it with npm and then picking and choosing the libraries of your choice. In this post, we will use create-react-app.

create-react-app conveniently configures tools like webpack, Babel and the testing libraries, so that you can concentrate purely on application code.

npx create-react-app my-first-react-app

npx is a package runner tool that comes with npm 5.2 and higher.

This single command sets up a JavaScript-based project and configures webpack, Babel and the testing libraries.

my-first-react-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│   ├── favicon.ico
│   ├── index.html
│   └── manifest.json
└── src
    ├── App.css
    ├── App.js
    ├── App.test.js
    ├── index.css
    ├── index.js
    ├── logo.svg
    ├── serviceWorker.js
    └── setupTests.js

It will also configure commands in package.json to start, build, test and eject (a command that removes the build-tooling abstraction by copying the webpack, Babel and testing configuration and dependencies directly into your project so that you can customize them).

Let's run the application with npm run start and visit localhost:3000 in the browser.

Creating first React component

Let's start with creating a component.

Create a new file Welcome.js and Welcome.css under src folder.

Add following lines to Welcome.js

import React from 'react';
import './Welcome.css';

function Welcome() {
  return <div className='welcome'>Welcome! My first react app</div>
}

export default Welcome;

Here, we have created a functional component, which is equivalent to extending the React.Component class and adding a render function to it.

import React from 'react';
import './Welcome.css';

class Welcome extends React.Component {
  render() {
    return <div className='welcome'>Welcome! My first react app</div>
  }
}

export default Welcome;

Now that we have our first React component, let's add it to App.js.

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <Welcome />
      </header>
    </div>
  );
}

Now, go to localhost:3000 and you will see the component loaded.

Creating Typescript based first React application

TypeScript is a typed superset of JavaScript which compiles to plain JavaScript; it adds type safety to JavaScript. create-react-app provides a convenient way to change the template used for generating the React skeleton project. Just pass the template parameter with a value as follows.

npx create-react-app my-first-react-app --template typescript

Adding first typed component

Create a new file Welcome.tsx and Welcome.css under src folder.

Add following lines to Welcome.tsx

import React from 'react';
import './Welcome.css';

interface IWelcomeProps {
  message?: string
}

function Welcome(props: IWelcomeProps) {
  const message = props?.message ?? 'Welcome! My first React app with Typescript.'
  return (<div className='welcome'>{message}</div>);
}

export default Welcome;

We have defined the interface IWelcomeProps with a single optional field, message. In the functional component, we have used optional chaining and the nullish coalescing operator, both added in TypeScript 3.7.

Let's add this component to App.tsx

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <Welcome />
        <Welcome message="Welcome Reader! My first react app with Typescript."/>
      </header>
    </div>
  );
}

We have added Welcome twice, with and without the message prop, so the rendered page will show both messages.

Recap

We created our first React application using create-react-app. We added our first React component, Welcome.js and Welcome.tsx, in the JavaScript- and TypeScript-based projects respectively.

What's next?

In the next post, we will build a BlogPost application using React's functional component and React hooks. We will use useState hook for state management. So, stay tuned!

Building a Blog Post application

Let's create a blog post application. It will have the below features:

  • Option to search blog posts.
  • Option to list blog posts.
  • Option to show blog post.

Create a new project with npx create-react-app react-blog-posts --template typescript.

Create a BlogPosts.tsx component under the src/components folder and an IBlogPost model under src/models.

// BlogPosts.tsx
import React from 'react';
import IBlogPost from '../models/IBlogPost';

interface IBlogPostsProps {
  posts: Array<IBlogPost>
}


function BlogPosts(props: IBlogPostsProps) {
  return (
    <div className="blog-container">
      <ul className="blog-posts">
        {
          props.posts.map(post => <li key={post.id}>{post.title}</li>)
        }
      </ul>
    </div>
  );
}

export default BlogPosts;

// IBlogPost.ts
interface IBlogPost {
  id: number
  title: string
  content: string
  author: string
  postedOn: string
  tags: string[]
}

export default IBlogPost;

Explanation: We have created a BlogPosts function which takes a parameter of type IBlogPostsProps. This type contains an array of posts of type IBlogPost. We are only showing the title of each blog post in this component. Shortly, we will update this component and extract the listing of blog posts into a separate component. For now, let's update App.tsx and use BlogPosts to show dummy posts.

function App() {
  return (
    <div className="App-Container">
      <BlogPosts posts={POSTS}/>
    </div>
  );
}

You can get the dummy posts array from here.
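The linked array is not reproduced here; a minimal hypothetical POSTS array conforming to IBlogPost would look like:

const POSTS: IBlogPost[] = [
  {
    id: 1,
    title: 'Introduction to React',
    content: 'React was first introduced to the general public in May 2013...',
    author: 'Gaurav Rai Mazra',
    postedOn: '2020-05-01',
    tags: ['react', 'javascript']
  },
  {
    id: 2,
    title: 'Introduction to Redux',
    content: 'Redux is a predictable state container for JavaScript apps...',
    author: 'Gaurav Rai Mazra',
    postedOn: '2020-05-08',
    tags: ['react', 'redux']
  }
];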

Run the application with npm run start and you will see the page loaded with the post titles.

Now, let's create a new component, BlogPost.tsx, which will show the selected blog post.

import React from 'react';
import IBlogPost from '../models/IBlogPost';
import './BlogPost.css';

interface IBlogPostProps {
  post: IBlogPost
}

function BlogPost(props: IBlogPostProps) {
  const post = props.post
  return (
    <div className='blog-post'>
      <div className='blog-post-title'>{post.title}</div>
      <div className='blog-post-body'>{post.content}</div>
      <div className='blog-post-footer'>
        <div className='blog-author'>{`By ${post.author} at ${post.postedOn}`}</div>
        <div className='blog-tags'>
          <div key='tags-label'>Tags: </div>
          {post.tags.map(tag => <div key={tag}>{tag}</div>)}
        </div>
      </div>
    </div>
  );
}

export default BlogPost;

Create a new component BlogListing.tsx to list the available posts to read.

import React from 'react';

declare type IBlogPostData = {
  id: number
  title: string
}

interface IBlogListing {
  blogPosts: IBlogPostData[]
  selectedBlogPost: number
  onClick: (id: number) => void
}

function BlogListing(props: IBlogListing) {
  return(
    <div className='blog-listing'>
      <ul className="blog-posts">
        {
          props.blogPosts.map(post => <li className={props.selectedBlogPost === post.id ? 'active' : ''} key={post.id} onClick={() => props.onClick(post.id)}>{post.title}</li>)
        }
      </ul>
    </div>
  );
}

export default BlogListing;

In this component, we have declared the type IBlogPostData, which holds the id and title of each blog to be listed. The component takes a collection of posts, selectedBlogPost (the active post) and an onClick function (the action to perform when a link is clicked) as props.

Now, update BlogPosts.tsx component and use BlogListing and BlogPost in it.

function BlogPosts(props: IBlogPostsProps) {
  /*1.*/const firstBlogPost = props.posts && props.posts.length > 0 ? props.posts[0] : null;
  /*2.*/const [ selectedBlogPost, setSelectedBlogPost ] = useState<IBlogPost | null>(firstBlogPost);

  /*3.*/function onBlogPostLinkClick(id: number): void {
    const selectedBlogPost = props.posts.find(post => post.id === id);
    setSelectedBlogPost(!!selectedBlogPost ? selectedBlogPost : null);
  }

  return (
    <div className="blog-container">
      <BlogListing
        selectedBlogPost={selectedBlogPost?.id ?? 0}
        blogPosts={props.posts.map(post => { return {id: post.id, title: post.title }})}
        /*4.*/onClick={onBlogPostLinkClick}
      />
      {!!selectedBlogPost ? <BlogPost post={selectedBlogPost}/>: null }
    </div>
  );
}

export default BlogPosts;

Explanation: At line 1, we retrieve the first post from the list of posts passed to this component. At line 2, we use the React hook useState for local state management; we use it to manage the state of the selected post shown in the BlogPost.tsx component. At line 3, we declare a function which updates selectedBlogPost in local state. At line 4, we pass the onBlogPostLinkClick function as an argument to BlogListing.tsx. This function gets called when you click on any of the post links in the BlogListing.tsx component.

Run the application with npm run start and you will see the page loaded with the first post selected.

Now, we will add an option to search blog posts either by title or by tags. Create a component BlogSearch.tsx under the src/components folder.

import React, { ChangeEvent } from 'react';
import { SearchType } from '../models/SearchType';

interface IBlogSearchProps {
  searchText: string
  selectedSearchOn: string
  onSearchChange: (searchText: string, searchType: SearchType) => void
  onSearchButtonClick: () => void
}

function BlogSearch(props: IBlogSearchProps) {
  function onSearchTextChange(event: ChangeEvent<HTMLInputElement>): void {
    props.onSearchChange(event.target.value, SearchType.SEARCH_TEXT)
  }

  function onSearchOnChange(event: ChangeEvent<HTMLSelectElement>): void {
    props.onSearchChange(event.target.value, SearchType.SEARCH_ON)
  }

  return(
    <div className="blog-search-container">
      <div className='blog-search-title'>Search Blog</div>
      <div className='blog-search-body'>
        <input type="text" className="form-control" autoComplete="off" value={props?.searchText ?? ''} onChange={onSearchTextChange}/>
        <select value={props.selectedSearchOn} className='form-control' onChange={onSearchOnChange}>
          <option value='tag'>Tags</option>
          <option value='title'>Title</option>
        </select>
        <button type="button" className="form-button" onClick={props.onSearchButtonClick}>Search</button>
      </div>
    </div>
  );
}

export default BlogSearch;

Explanation: This component expects four properties: searchText (the text to be searched), selectedSearchOn (whether it is a tag or a title search) and two functions, one called whenever the Search Text or Search On field changes and the other called when the Search button is clicked. These functions are passed down from the top component, BlogPosts.tsx, because we do local state management in that component and all the other components are stateless.
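The SearchType model is not listed in the post; a minimal src/models/SearchType.ts consistent with how it is used here would be:

export enum SearchType {
  SEARCH_TEXT = 'SEARCH_TEXT',
  SEARCH_ON = 'SEARCH_ON'
}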

We also updated BlogListing.tsx to use the BlogSearch.tsx component; it now takes the four additional properties used by BlogSearch.tsx and passes them through. Finally, we have updated the BlogPosts.tsx component.

import React, { useState } from 'react';
import IBlogPost from '../models/IBlogPost';
import './BlogPosts.css';
import BlogListing from './BlogListing';
import BlogPost from './BlogPost';
import { SearchType } from '../models/SearchType';

interface IBlogPostsProps {
  posts: Array<IBlogPost>
}


function BlogPosts(props: IBlogPostsProps) {
  function findFirstPost(posts: Array<IBlogPost>) : IBlogPost | null {
    return posts && posts.length > 0 ? posts[0] : null;
  }

  /*1.*/const [ posts, setPosts ] = useState(props.posts)
  /*2.*/const [ showingPost, setShowingPost ] = useState<IBlogPost | null>(findFirstPost(posts));
  /*3.*/const [ searchText, setSearchText ] = useState<string>('');
  /*4.*/const [ selectedSearchOn, setSelectedSearchOn ] = useState<string>('tag')

  /*5.*/function onBlogPostLinkClick(id: number): void {
    const newShowingPost = posts.find(post => post.id === id);
    setShowingPost(!!newShowingPost ? newShowingPost : null);
  }
  
  /*6.*/function onChangeHandler(value: string, searchType: SearchType) : void {
   if (SearchType.SEARCH_TEXT === searchType) {
    setSearchText(value)
   } else {
     setSelectedSearchOn(value)
   }
  }

  function isMatched(value: string) {
    return value.toLowerCase().includes(searchText.toLowerCase())
  }

  function filterPost(post: IBlogPost) {
    if (selectedSearchOn === 'title') {
      return isMatched(post.title)
    } else {
      return post.tags.some(isMatched)
    }
  }

  /*7.*/function onSearch() {
    if (searchText !== '') {
      const foundPosts = props.posts.filter(filterPost)
      setShowingPost(findFirstPost(foundPosts))
      setPosts(foundPosts)
    } else {
      setShowingPost(findFirstPost(props.posts))
      setPosts(props.posts)
    }
  }

  return (
    <div className="blog-container">
      <BlogListing
        showingPost={showingPost?.id ?? 0}
        blogPosts={posts.map(post => { return {id: post.id, title: post.title }})}
        onClick={onBlogPostLinkClick}
        searchText={searchText}
        onSearchChange={onChangeHandler}
        onSearchButtonClick={onSearch}
        selectedSearchOn={selectedSearchOn}
      />
      {!!showingPost ? <BlogPost post={showingPost}/>: null }
    </div>
  );
}

export default BlogPosts;

Lines 1 to 4 declare posts, showingPost, searchText and selectedSearchOn respectively.

Line 5 defines the onClick function invoked whenever a post link is clicked in the BlogListing component. This function takes the id of the blog to be shown, searches the list of posts (see Line 1) in local state, and updates the showingPost field (see Line 2) in local state.

Line 6 defines an onChange function which gets called whenever the search-text or search-on field changes in the BlogSearch component. Based on the search type, it updates either searchText (see Line 3) or selectedSearchOn (see Line 4) in local state.

Line 7 defines the onClick function for the Search button in the BlogSearch component. This function updates posts (see Line 1) and showingPost (see Line 2) in local state, based on the searchText (see Line 3) and selectedSearchOn (see Line 4) fields in local state.

Recap

We used create-react-app to create our first React project (JavaScript- and TypeScript-based). Then, we added our first React component (Welcome.js and Welcome.tsx) to the projects. We started building a blog website which has functionality to list posts, search posts and show a post. We first created BlogPosts.tsx, which only showed the names of the posts. Then, we created two components: BlogListing.tsx to show the list of posts and BlogPost.tsx to show the currently viewed post. Then, we added state management in BlogPosts.tsx to show a post whenever its link is clicked in the BlogListing.tsx component. Next, we added the BlogSearch.tsx component to search blogs by title or tags.

What's next?

In the next post, we will introduce Redux to manage the state and reselect to add selectors to the application. Stay tuned!

Note: You can download the final source code for this application from github.

In this post, we will look into how to retrieve auto-generated keys in JDBC. We will also explore the usage of PreparedStatementCreator and PreparedStatementCallback in JdbcTemplate.

There are cases when you rely on the database server to auto-generate values for some columns of a table, e.g. an auto-increment primary key, a creation_date, or any other column populated while inserting records. There is a way to retrieve those auto-generated keys when you execute the insert statement. Let's see how you can do this using Spring JDBC, but first let's see what the PreparedStatementCreator and PreparedStatementCallback interfaces are.

What is PreparedStatementCreator?

There are cases when you want to create the PreparedStatement yourself. One use case is returning auto-generated keys. Spring JDBC gives you that option by letting you provide an implementation of PreparedStatementCreator. Let's create an implementation of PreparedStatementCreator which sets those options.

public class ReturnGeneratedKeysPreparedStatementCreator implements PreparedStatementCreator, SqlProvider {
  private final String sql;
  private String[] generatedColumnNames;

  public ReturnGeneratedKeysPreparedStatementCreator(String sql) {
    this(sql, Collections.emptyList());
  }

  public ReturnGeneratedKeysPreparedStatementCreator(String sql, List<String> generatedColumnNames) {
    this.sql = sql;
    this.generatedColumnNames = Objects.nonNull(generatedColumnNames)
    ? generatedColumnNames.toArray(new String[generatedColumnNames.size()])
    : new String[0];
  }

  @Override
  public PreparedStatement createPreparedStatement(Connection con) throws SQLException {
    return generatedColumnNames.length > 0 ? con.prepareStatement(this.sql, this.generatedColumnNames)
    : con.prepareStatement(this.sql, Statement.RETURN_GENERATED_KEYS);
  }

  @Override
  public String getSql() {
    return this.sql;
  }

}

There are two options you can use to retrieve generated keys: one is to ask for all the generated keys (Statement.RETURN_GENERATED_KEYS), and the other is to pass the names of the columns you want retrieved.

What is PreparedStatementCallback?

In normal usage, you might never need to implement this interface; you provide an implementation only when you need to execute some extra code around statement execution, e.g. retrieval of auto-generated keys. Let's see how you can do this by implementing the interface.

class GeneratedKeysPreparedStatementCallback implements PreparedStatementCallback<Integer> {

  @Override
  public Integer doInPreparedStatement(PreparedStatement ps) throws SQLException {
    int updated = ps.executeUpdate();
    if (updated > 0) {
      try (ResultSet rs = ps.getGeneratedKeys()) {
        if (rs.next())
          return rs.getInt("id");
      }
   
      throw new DataRetrievalFailureException("There was no key auto generated by the database");
    }
    throw new DataRetrievalFailureException("Nothing was updated");
  }
}

Usage

Integer key = jdbcTemplate.execute(new ReturnGeneratedKeysPreparedStatementCreator(
    "insert into product(name, category, description) values('Acer Laptop', 'laptop', 'Predator series')"),
    new GeneratedKeysPreparedStatementCallback());
log.info(() -> String.format("Product saved in database with key: %d", key));
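If you only need specific columns back, pass the column names to the second constructor instead; a sketch:

Integer key = jdbcTemplate.execute(new ReturnGeneratedKeysPreparedStatementCreator(
    "insert into product(name, category, description) values('Lenovo Laptop', 'laptop', 'Thinkpad series')",
    Collections.singletonList("id")),
    new GeneratedKeysPreparedStatementCallback());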

That's it. You can find the complete code on Github.

There are many different variants of the #batchUpdate method available in JdbcTemplate. We will specifically look into those which use BatchPreparedStatementSetter and ParameterizedPreparedStatementSetter.

What is BatchPreparedStatementSetter?

It is an interface used by JdbcTemplate to execute batch updates. It has a method to determine the batch size and a method to set parameters on the PreparedStatement. Using it, JdbcTemplate executes a single batch whose size is the value returned by the implementation of this interface.

How to use BatchPreparedStatementSetter?

Let's create a ProductBatchPreparedStatementSetter which can set parameters in the statement.

public class ProductBatchPreparedStatementSetter implements BatchPreparedStatementSetter {

  private final List<Product> products;

  public ProductBatchPreparedStatementSetter(List<Product> products) {
    Objects.requireNonNull(products);

    // Ideally you should do a defensive copy of this list.
    // this.products = new ArrayList<>(products);
    this.products = products;
  }

  @Override
  public void setValues(PreparedStatement ps, int i) throws SQLException {
    Product product = products.get(i);
    ps.setString(1, product.getName());
    ps.setString(2, product.getCategory());
    ps.setString(3, product.getDescription());
  }

  @Override
  public int getBatchSize() {
    return products.size();
  }

}

Usage

int[] results = jdbcTemplate.batchUpdate("insert into product (name, category, description) values(?,?,?)",
    new ProductBatchPreparedStatementSetter(Arrays.asList(
        new Product("Lenovo Laptop", "laptop", "Thinkpad series laptop"),
        new Product("Acer Laptop", "laptop", "Predator series laptop"))));
log.info(() -> String.format("Inserted rows: %s", Arrays.toString(results)));

What is ParameterizedPreparedStatementSetter?

It is an interface used by JdbcTemplate to execute batch updates. It has only one method, which takes a PreparedStatement and a typed object as parameters. Using it, JdbcTemplate can execute multiple batches based on the batch size passed to the #batchUpdate method.

How to use ParameterizedPreparedStatementSetter?

Let's create a pretty straightforward implementation of this interface for our Product example.

ParameterizedPreparedStatementSetter<Product> pss = (ps, product) -> {
    ps.setString(1, product.getName());
    ps.setString(2, product.getCategory());
    ps.setString(3, product.getDescription());
  };

Usage

int batchSize = 5;
List<Product> products = ...; // the products to insert
int[][] result = jdbcTemplate.batchUpdate("insert into product (name, category, description) values(?,?,?)",
    products, batchSize, pss);
log.info(Arrays.deepToString(result));

The #batchUpdate method which uses BatchPreparedStatementSetter returns a 1-D int array, whereas the #batchUpdate method which uses ParameterizedPreparedStatementSetter returns a 2-D array. This reflects that BatchPreparedStatementSetter executes a single batch whereas ParameterizedPreparedStatementSetter executes multiple batches.

That's it. You can find the complete code of this example on Github.

What is PreparedStatementSetter?

It is a callback interface used by JdbcTemplate, after the PreparedStatement is created, to set the values in the statement object.

How to use it?

PreparedStatementSetter is also a functional interface, so we will use a lambda expression in this example to demonstrate its usage. We will use it in the #update method of JdbcTemplate.

int updateCount = jdbcTemplate.update("insert into product(name, category, description) values(?,?,?)", ps -> {
    ps.setString(1, "Lenovo Bag");
    ps.setString(2, "bag");
    ps.setString(3, "Handcrafted bags by Lenovo");
});

log.info(() -> String.format("Product inserted: %d", updateCount));

You can get the full code of this example from here.

What is ResultSetExtractor?

It is an interface used by the #query methods of JdbcTemplate. It is better suited when you want to map an entire ResultSet to a single result object; otherwise, RowMapper is the simpler choice for mapping one row of the ResultSet to one object.

How to use it?

Let's first create a ResultSetExtractor which maps all the rows of the ResultSet to a single object. For this, we will create a ProductResultSetExtractor which returns a ProductResponse.

public class ProductResultSetExtractor implements ResultSetExtractor<ProductResponse> {
  private final RowMapper<Product> productRowMapper;

  public ProductResultSetExtractor(RowMapper<Product> productRowMapper) {
    super();
    this.productRowMapper = productRowMapper;
  }

  @Override
  public ProductResponse extractData(ResultSet rs) throws SQLException {
    final List<Product> products = new ArrayList<>();

    int rowNum = 0;
    while(rs.next()) {
      products.add(productRowMapper.mapRow(rs, rowNum));
      rowNum++;
    }

    return ProductResponse.of(products);
  }
}

Now, we will use the #query method of JdbcTemplate with this ProductResultSetExtractor to return the result.

ProductResponse productResponse = jdbcTemplate.query("select * from product", new ProductResultSetExtractor(new ProductRowMapper()));

log.info(productResponse::toString);

That's it. You can find the full example code on github.

In this post, we will discuss what RowMapper is and how to use it when writing JDBC code with the Spring JDBC module.

What is RowMapper?

It is an interface of the Spring JDBC module, used by JdbcTemplate to map rows of a java.sql.ResultSet to objects. It is typically used when you query data.

Example usage of RowMapper

Let's first create a RowMapper which can map products.

class ProductRowMapper implements RowMapper<Product> {

  @Override
  public Product mapRow(ResultSet rs, int rowNum) throws SQLException {
    Product product = new Product();
    product.setId(rs.getInt("id"));
    product.setName(rs.getString("name"));
    product.setDescription(rs.getString("description"));
    product.setCategory(rs.getString("category"));
    return product;
  }
}

Now, we will use this ProductRowMapper in #queryForObject of JdbcTemplate.

Product product = jdbcTemplate.queryForObject("select * from product where id=1", new ProductRowMapper());
log.info(product::toString);

You can find the github code here.

What is Spring JdbcTemplate?

JdbcTemplate is the core class of Spring JDBC. It simplifies your interaction with the low-level, error-prone details of JDBC access. You only pass the SQL statement to execute, the parameters and the processing logic for the returned data; the rest is handled by it, i.e. opening the connection, transaction handling, error handling and closing the Connection, Statement and ResultSet.

How to create object of JdbcTemplate?

1. By calling the no-args constructor.

JdbcTemplate jdbcTemplate = new JdbcTemplate();

// You need to set the DataSource at a later point in time and also call afterPropertiesSet.
jdbcTemplate.setDataSource(dataSource);
jdbcTemplate.afterPropertiesSet();

2. By calling the constructor with a DataSource.

JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

3. By calling the constructor with a DataSource and the lazyInit parameter.

JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource, lazyInit);

Querying with JdbcTemplate

There are many variants of querying with JdbcTemplate. We will look into the queryForObject and queryForList methods.

JdbcTemplate.queryForObject(String sql, Class<T> requiredType)

We will use this variant of #queryForObject when the query returns a single row with a single column.

Integer count = jdbcTemplate.queryForObject("select count(*) from product", Integer.class);
log.info(() -> String.format("There are total %d products", count));

JdbcTemplate.queryForObject(String sql, Class<T> requiredType, @Nullable Object... args)

We will use this variant when we need to pass SQL bind parameters.

Integer mobileProducts = jdbcTemplate.queryForObject("select count(*) from product where category=?", Integer.class, "mobile");
log.info(() -> String.format("There are total %d mobile products", mobileProducts));

JdbcTemplate.queryForList(String sql, Class<T> elementType)

This variant is useful when the query returns a list of values for a single column.

// E.g. getting list of product names
List<String> mobileNames = jdbcTemplate.queryForList("select name from product where category='mobile'", String.class);
log.info(() -> String.format("Name of mobiles: %s", mobileNames.toString()));

You can get the full example code here.

In this post, we will learn how to use Elasticsearch, Logstash and Kibana for running analytics on application events and logs. Firstly, I will install all these applications on my local machine.

Installations

You can read my previous posts on how to install Elasticsearch, Logstash, Kibana and Filebeat on your local machine.

Basic configuration

I hope by now you have installed Elasticsearch, Logstash, Kibana and Filebeat on your system. Now, let's do the few basic configurations required to run analytics on application events and logs.

Elasticsearch

Open the elasticsearch.yml file in the [ELASTICSEARCH_INSTALLATION_DIR]/config folder and add the following properties to it.

cluster.name: gauravbytes-event-analyzer
node.name: node-1

The cluster name is used by Elasticsearch nodes to form a cluster. Node names within a cluster need to be unique. We are running only a single instance of Elasticsearch on our local machine, but in a production-grade setup there will be master, data and client nodes that you will configure as per your requirements; a sketch follows below.
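For example, in a production 6.x cluster a dedicated master-eligible node could be configured with node roles like these (a sketch; adjust per node type):

node.master: true
node.data: false
node.ingest: false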

Logstash

Open the logstash.yml file in the [LOGSTASH_INSTALLATION_DIR]/config folder and add the below properties to it.

node.name: gauravbytes-logstash
path.data: [MOUNTED_HDD_LOCATION]
config.reload.automatic: true
config.reload.interval: 30s

Creating logstash pipeline for parsing application events and logs

There are three parts in a pipeline, i.e. input, filter and output. Below is the pipeline conf for parsing application events and logs.

input {
    beats {
        port => "5044"
    }
}

filter {
   
    grok {
        match => {"message" => "\[%{TIMESTAMP_ISO8601:loggerTime}\] *%{LOGLEVEL:level} *%{DATA:loggerName} *- (?<event>(.|\r|\n)*)"}
    }
 
    if ([fields][type] == "appevents") {
        json {
            source => "event"
            target => "appEvent"
        }
  
        mutate { 
            remove_field => "event"
        }

        date {
            match => [ "[appEvent][eventTime]" , "ISO8601" ]
            target => "@timestamp"
        }
  
        mutate {
            replace => { "[type]" => "app-events" }
        }
    }
    else if ([fields][type] == "businesslogs") {  
        mutate {
            replace => { "[type]" => "app-logs" }
        }
    }
 
    mutate { 
        remove_field => "message"
    }
}
output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "%{type}-%{+YYYY.MM.dd}"
    }
}

In the input section, we are listening on port 5044 for beats (Filebeat sends data to this port).

In the output section, we are persisting data in Elasticsearch on an index based on type and date combination.

Let's discuss the filter section in detail.

  • 1) We are using the grok filter plugin to parse plain lines of text into structured data.
    grok {
        match => {"message" => "\[%{TIMESTAMP_ISO8601:loggerTime}\] *%{LOGLEVEL:level} *%{DATA:loggerName} *- (?<event>(.|\r|\n)*)"}
    }
    
  • 2) We are using the json filter plugin to convert the event field to a JSON object, storing it in the appEvent field.
    json {
        source => "event"
        target => "appEvent"
    }
    
  • 3) We are using the mutate filter plugin to remove data we don't require.
    mutate { 
        remove_field => "event"
    }
    
    mutate { 
        remove_field => "message"
    }
    
  • 4) We are using the date filter plugin to parse eventTime from the appEvent field as an ISO8601 date and use it as the value of the @timestamp field.
    date {
        match => [ "[appEvent][eventTime]" , "ISO8601" ]
        target => "@timestamp"
    }
    

Filebeat

Open the filebeat.yml file in [FILEBEAT_INSTALLATION_DIR] and add the below configurations.

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - E:\gauravbytes-log-analyzer\logs\AppEvents.log
  fields:
    type: appevents
  
- type: log
  enabled: true
  paths:
    - E:\gauravbytes-log-analyzer\logs\GauravBytesLogs.log
  fields:
    type: businesslogs
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3

output.logstash:
  hosts: ["localhost:5044"]

In the configurations above, we define two different types of Filebeat prospectors: one for application events and the other for application logs. We have also defined that the output should be sent to Logstash. There are many other configurations you can set by referencing the filebeat.reference.yml file in the Filebeat installation directory.

Kibana

Open the kibana.yml in [KIBANA_INSTALLATION_DIR]/config folder and add below configuration to it.

elasticsearch.url: "http://localhost:9200"

We have only configured the Elasticsearch URL, but you can also change the Kibana host, port, name and other SSL-related configurations.

Running ELK stack and Filebeat

//running elasticsearch on windows
bin\elasticsearch.exe

// running logstash
bin\logstash.bat -f config\gauravbytes-config.conf --config.reload.automatic

//running kibana
bin\kibana.bat

//running filebeat
filebeat.exe -e -c filebeat-test.yml -d "publish"

Creating Application Event and Log structure

I have created two classes, AppEvent.java and AppLog.java, which capture information related to application events and logs. Below is the structure of both classes.

//AppEvent.java
public class AppEvent implements BaseEvent<AppEvent> {
    public enum AppEventType {
        LOGIN_SUCCESS, LOGIN_FAILURE, DATA_READ, DATA_WRITE, ERROR;
    }

    private String identifier;
    private String hostAddress;
    private String requestIP;
    private ZonedDateTime eventTime;
    private AppEventType eventType;
    private String apiName;
    private String message;
    private Throwable throwable;
}

//AppLog.java
public class AppLog implements BaseEvent<AppLog> {
    private String apiName;
    private String message;
    private Throwable throwable;
}

Let's generate events and logs

I have created a sample application to generate dummy events and logs. You can check out the full project on github. There is an AppEventGenerator Java file; run this class with the system argument -DLOG_PATH=[YOUR_LOG_DIR] to generate dummy events. If your log path is not the same as the one defined in filebeat-test.yml, then copy the log files generated by this project to the location defined there. You will soon see the events and logs persisted in Elasticsearch.

Running analytics on application events and logs in Kibana dashboard

Firstly, we need to define an index pattern in Kibana to view the application events and logs. Follow the step-by-step guide below to create an index pattern.

  • Open Kibana dashboard by opening the url (http://localhost:5601/).
  • Go to Management tab. (Left pane, last option)
  • Click on Index Patterns link.
  • You will see the already-created index patterns, if any. On the left side, you will see the option to create an index pattern; click on it.
  • Now, define the index pattern and click Next. Choose the time filter field name; I chose the @timestamp field, but you can select any other timestamp field present in this index. Finally, click the Create index pattern button.

Let's view Kibana dashboard

Once Index pattern is created, click on Discover tab on the left pane and select index pattern created by you in the previous steps.

You will see a beautiful GUI with a lot of options to mine the data. In the topmost pane, you will see options for auto-refresh and for the time range of data you want to fetch (last 15 minutes, 30 minutes, 1 hour, 1 day and so on); the dashboard will refresh automatically.

The next lane has the search box. You can write queries there for a more granular view of the data; it uses Apache Lucene's query syntax.
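For example, with the indices created by our Logstash pipeline above, queries could look like the following (field values are illustrative):

// failed logins captured as application events
appEvent.eventType: LOGIN_FAILURE

// combine fields with boolean operators
level: ERROR AND appEvent.apiName: login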

You can also define filters to narrow the data down further.

This is how you can run analytics on your application events and logs using ELK. You can also define complex custom filters and queries and create visualization dashboards. Feel free to explore Kibana's official documentation to use it to its full potential.

Java 8 introduced default and static methods in interfaces. These features allow us to add new functionality to interfaces without breaking the existing contract for implementing classes.

How do we define default and static methods?

A default method has the default keyword and a static method has the static keyword in its method signature.

public interface InterfaceA {
  double someMethodA();

  default double someDefaultMethodB() {
    // some default implementation
    return 0d;
  }

  static void someStaticMethodC() {
    // helper method implementation
  }
}

A few important points about default methods (each option is sketched below)

  • You can inherit the default method.
  • You can redeclare the default method essentially making it abstract.
  • You can redefine the default method (equivalent to overriding).
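A quick sketch of the three options, using a hypothetical Vehicle interface:

interface Vehicle {
  default String fuelType() { return "petrol"; }
}

// 1) Inherit the default method as-is
interface Car extends Vehicle { }

// 2) Redeclare it, making it abstract again
interface ElectricVehicle extends Vehicle {
  String fuelType();
}

// 3) Redefine (override) it
interface DieselCar extends Vehicle {
  default String fuelType() { return "diesel"; }
}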

Why do we need default and static methods?

Consider an existing Expression interface with existing implementations like ConstantExpression, BinaryExpression, DivisionExpression and so on. Now, you want to add the new functionality of returning the signum of the evaluated result. This can be done with default and static methods, without breaking any implementing class, as follows.

public interface Expression {
  double evaluate();

  default double signum() {
    return signum(evaluate());
  }

  static double signum(double value) {
    return Math.signum(value);
  }
}

You can find the full code on Github.

Default methods and multiple inheritance ambiguity problem

Java supports multiple inheritance of interfaces. Suppose you have two interfaces, InterfaceA and InterfaceB, with the same default method, and your class implements both interfaces.

interface InterfaceA {
  default void doSomeWork() {

  }
}

interface InterfaceB {
  default void doSomeWork() {

  }
}

class ConcreteC implements InterfaceA, InterfaceB {

}

The above code will fail to compile with the error: class ConcreteC inherits unrelated defaults for doSomeWork() from types InterfaceA and InterfaceB.

To overcome this problem, you need to override the default method.

class ConcreteC implements InterfaceA, InterfaceB {
  @Override
  public void doSomeWork() {

  }
}

If you don't want to provide your own implementation of the overridden default method but want to reuse an inherited one, that is also possible with the following syntax.

class ConcreteC implements InterfaceA, InterfaceB {
  @Override
  public void doSomeWork() {
    InterfaceB.super.doSomeWork();
  }
}

I hope you find this post informative and useful. Comments are welcome!

Logstash

Logstash is a data-processing pipeline which ingests data simultaneously from multiple data sources, transforms it and sends it to different `stashes`, i.e. Elasticsearch, Redis, a database, a REST endpoint, etc. For example: ingesting log files, then cleaning and transforming them into machine- and human-readable formats.

There are three components in Logstash, i.e. Inputs, Filters and Outputs.

Inputs

It ingests data of any kind, shape and size. For example: logs, AWS metrics, instance health metrics, etc.

Filters

Logstash filters parse each event, build a structure, enrich the data in the event and transform it to the desired form. For example: enriching geo-location from an IP using the geoip filter, anonymizing PII in events, or transforming unstructured data into structured data using the grok filter.

Outputs

This is the sink layer. There are many output plugins, e.g. Elasticsearch, email, Slack, Datadog, database persistence, etc.

Installing Logstash

As of this writing, Logstash (6.2.3) requires Java 8 to run. To check the Java version, run the following command.

java -version

The output on my system is

java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)

If Java 8 is not installed, please download it from the Oracle website and follow the instructions for installation. Also, set the JAVA_HOME variable.

Installing from binaries

You can directly download the binaries from here.

Installing from package repositories

Installation with APT

//ADD PUBLIC SIGNING KEY
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

//add https-transports
sudo apt-get install apt-transport-https

//save the repository definition
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

//installation command
sudo apt-get update && sudo apt-get install logstash

Installation with YUM

// Download and install the public signing key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following in a new .repo file in your /etc/yum.repos.d/ directory

[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

// Installation command
sudo yum install logstash

Docker installation

You can follow the link for docker installation.

What is Elasticsearch?

Elasticsearch is a highly scalable, broadly distributed, open-source full-text search and analytics engine. You can search, store and index big volumes of data in very near real-time. It internally uses Apache Lucene for indexing and storing data. Below are a few use cases for it.

  • Product search for e-commerce website
  • Collecting application logs and transaction data and analyzing them for trends and anomalies.
  • Indexing instance metrics (health, stats), running analytics and creating alerts for instance health at regular intervals.
  • For analytics/ business-intelligence applications

Elasticsearch basic concepts

We will be using a few terms while talking about Elasticsearch. Let's see the basic building blocks of Elasticsearch.

Near real-time

Elasticsearch is near real-time, meaning there is only a small latency between the indexing of a document and its availability for searching.

Cluster

It is a collection of one or more nodes (servers) that together hold the entire data and provide the ability to index and search the cluster for data.

Node

It is a single server that is part of your cluster. It can store data and participate in indexing, searching and overall cluster management. A node can have four different flavours, i.e. master, http, data and coordinating/client nodes.

Index

An index is a collection of documents with similar kinds/characteristics. It is identified by a name (all lowercase), which is used to refer to the index when performing indexing, search, update and delete operations against its documents.

Document

It is a single unit of information that can be indexed.

Shards and Replicas

A single index can store billions of documents, which can lead to storage taking up terabytes of space. A single server could exceed its limits when storing such massive information or performing search operations on that data. To solve this problem, Elasticsearch subdivides your index into multiple units called shards.

Replication is important primarily for high availability in case of node/shard failure and to scale out your search throughput. By default an Elasticsearch index has 5 shards and 1 replica, which can be configured at the time of creating the index.

Installing Elasticsearch

Elasticsearch requires Java to run. As of writing this article, Elasticsearch 6.2.x requires at least Java 8.

Installing Java 8

// Installing Open JDK
sudo apt-get install openjdk-8-jdk
 
// Installing Oracle JDK
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install oracle-java8-installer

Installing Elasticsearch with tar file

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz

tar -xvf elasticsearch-6.2.4.tar.gz

Installing Elasticsearch with package manager

// import the Elasticsearch public GPG key into apt:
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

//Create the Elasticsearch source list
echo "deb http://packages.elastic.co/elasticsearch/6.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-6.x.list
  
sudo apt-get update
  
sudo apt-get -y install elasticsearch

Configuring Elasticsearch cluster

Configuration file location if you have downloaded the tar file

vi /[YOUR_TAR_LOCATION]/config/elasticsearch.yml

Configuration file location if you used package manager to install Elasticsearch

vi /etc/elasticsearch/elasticsearch.yml

Cluster Name

Use a descriptive name for the cluster. Elasticsearch nodes will use this name to form and join the cluster.

cluster.name: lineofcode-prod

Node name

To uniquely identify the node in the cluster.

node.name: ${HOSTNAME}

Custom attributes to node

Add a rack attribute to a node to logically group the nodes placed in the same data center or on the same physical machine.

node.attr.rack: us-east-1

Network host

Node will bind to this hostname or IP address and advertise this host to other nodes in the cluster.

network.host: [_VPN_HOST_, _local_]

Elasticsearch does not come with authentication and authorization, so it is suggested never to bind the network host property to a public IP address.

Cluster discovery settings

To find and join a cluster, a node needs to know at least a few other hostnames or IP addresses. These can easily be set with the discovery.zen.ping.unicast.hosts property.
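For example (hypothetical addresses):

discovery.zen.ping.unicast.hosts: ["10.0.0.10", "10.0.0.11:9301"]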

Changing the http port

You can configure the port on which Elasticsearch is accessible over HTTP with the http.port property.
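For example:

http.port: 9200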

Configuring JVM options (Optional for local/test)

You need to tweak the JVM options as per your hardware configuration. It is advisable to allocate half of the server's available memory to Elasticsearch; the rest will be taken up by Lucene and the Elasticsearch threads.

// For example, if your server has eight GB of RAM, then set the following properties
-Xms4g
-Xmx4g

Also, to avoid a performance hit, let Elasticsearch lock its memory with the bootstrap.memory_lock: true property.

Elasticsearch uses the concurrent mark-and-sweep (CMS) GC by default; you can change it to G1GC with the following configuration.

-XX:-UseParNewGC
-XX:-UseConcMarkSweepGC
-XX:+UseCondCardMark
-XX:MaxGCPauseMillis=200
-XX:+UseG1GC
-XX:GCPauseIntervalMillis=1000
-XX:InitiatingHeapOccupancyPercent=35

Starting Elasticsearch

sudo service elasticsearch restart

TADA! Elasticsearch is up and running on your local machine.

To have a production-grade setup, I would recommend visiting the following articles.

Digitalocean guide to setup production elasticsearch

Elasticsearch - Fred Thoughts

In the last few posts, we learnt what Apache Ignite is, how to set it up, and saw a few quick examples. In this post, we will take a deep dive into the core Ignite classes and discuss the following internals.

  • Core classes
  • Lifecycle events
  • Client and Server mode
  • Thread pools configurations
  • Asynchronous support in Ignite
  • Resource injection

Core classes

Whenever you interact with Apache Ignite in an application, you will always encounter the Ignite interface and the Ignition class. Ignition is the main entry point for creating an Ignite node. This class provides various methods to start a grid node in the network topology.

// Starting with default configuration
Ignite igniteWithDefaultConfig = Ignition.start();

// Ignite with Spring configuration xml file
Ignite igniteWithSpringCfgXMLFile = Ignition.start("/path_to_spring_configuration_xml.xml");

// ignite with java based configuration
IgniteConfiguration icfg = ...;
Ignite igniteWithJavaConfiguration = Ignition.start(icfg);

There are also other useful methods in the Ignition class, which we will discuss below. The Ignite interface provides control over a node. It has various methods to interact with the data grid, service grid, compute grid, scheduler and many more.

Lifecycle events

Apache Ignite provides four lifecycle events, i.e. BEFORE_NODE_START, AFTER_NODE_START, BEFORE_NODE_STOP and AFTER_NODE_STOP, and provides hooks to tap into these events. You need to implement LifecycleBean and set the implementation in the Ignite configuration, as sketched after the listener below.

class IgniteLifecycleEventListener implements LifecycleBean {

    @Override
    public void onLifecycleEvent(LifecycleEventType evt) throws IgniteException {
        String message;
        switch (evt) {
            case BEFORE_NODE_START:
                message = "before_node_start event is called!";
                break;
            case AFTER_NODE_START:
                message = "after_node_start event is called!";
                break;
            case BEFORE_NODE_STOP:
                message = "before_node_stop event is called!";
                break;
            case AFTER_NODE_STOP:
                message = "after_node_stop event is called!";
                break;
            default:
                message = "Unknown event";
                break;
        }
        System.out.println(message);
    }
}
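To register the bean, set it in the node configuration before starting the node; a minimal sketch:

IgniteConfiguration icfg = new IgniteConfiguration();
icfg.setLifecycleBeans(new IgniteLifecycleEventListener());

try (Ignite ignite = Ignition.start(icfg)) {
    // the listener prints the before/after start messages here,
    // and the stop messages when the node shuts down
}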

Client and Server mode

An Apache Ignite node can run in client or server mode. Server nodes participate in computing, caching, the data grid, the service grid, etc., while client nodes are a way to interact with the server nodes to get near-cache, transaction, computing and service-grid functionality. Nodes are servers by default; you need to explicitly enable client mode.

// on the node configuration
IgniteConfiguration.setClientMode(...);

// or thread-locally, before calling Ignition.start()
Ignition.setClientMode(...);

Thread pool configurations

System thread pool

It processes all cache-related operations except SQL and some other queries, and also handles the cancellation of compute tasks.

IgniteConfiguration.setSystemThreadPoolSize(...);
// By default its size equals max(8, total number of cores)

Public thread pool

All compute tasks are received and processed in this thread pool.

IgniteConfiguration.setPublicThreadPoolSize(...);
// By default its size equals max(8, total number of cores)

Queries pool

Handles SQL queries and SCAN operations executed across the cluster.

IgniteConfiguration.setQueryThreadPoolSize(...);
// By default its size equals max(8, total number of cores)

Services Pool

Handles service-grid calls.

IgniteConfiguration.setServiceThreadPoolSize(...);
// By default its size equals max(8, total number of cores)

Striped Pool

Accelerates basic caching operations and transactions by spreading execution across multiple stripes that don't contend with each other.

IgniteConfiguration.setStripedPoolSize(...);
//By default it has size equal to max(8, total_no_of_cores)

Data stream pool

Used in data streaming.

IgniteConfiguration.setDataStreamerPoolSize(...);
//By default it has size equal to max(8, total_no_of_cores)

Custom thread pool

You can define your own custom thread pools; these are used in the compute grid. For example, suppose you want to run another task synchronously from within a compute grid task while avoiding deadlocks: executing the nested task in a custom thread pool achieves this.

class InternalTask implements IgniteRunnable {
    private static final long serialVersionUID = 5169676352276118235L;
    
    @Override
    public void run() {
        System.out.println("Internal task executed!");
    }
}

class OuterTask implements IgniteRunnable {
    private static final long serialVersionUID = 602712410415356484L;

    @IgniteInstanceResource
    private Ignite ignite;
  
    @Override
    public void run() {
        System.out.println("Ignite Outer task!");
        ignite.compute().withExecutor("myCustomThreadPool").run(new InternalTask());
    }
  
}

// Ignite main example class
IgniteConfiguration icfg = defaultIgniteCfg("custom-thread-pool-grid");
icfg.setExecutorConfiguration(new ExecutorConfiguration("myCustomThreadPool").setSize(16));
  
try (Ignite ignite = Ignition.start(icfg)) {
    ignite.compute().run(new OuterTask());
}

Asynchronous support in Ignite

The Ignite API comes with synchronous and asynchronous support. Asynchronous calls return an IgniteFuture or one of its implementations. You can call the blocking get method to get the value, or add a listener (an IgniteInClosure) which gets executed as soon as the IgniteFuture has the result.

IgniteCompute compute = ignite.compute();
IgniteFuture<String> fut = compute.callAsync(() -> "Hello from Callable");
// blocking call
String result = fut.get();
// the listener gets executed as soon as the future has the result
fut.listen(f -> System.out.println(f.get()));
If the IgniteFuture already has the result of the asynchronous operation by the time the IgniteInClosure is passed to the listen or chain method, then it is executed synchronously in the caller thread. Otherwise, the closure gets executed when the asynchronous operation finishes. The closure is called from the system thread pool for asynchronous cache-related operations, or from the public thread pool in the case of compute operations. So, it is recommended to avoid calling cache or compute related operations from the closure, to prevent deadlocks due to thread starvation.
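
The chain method mentioned above transforms the result of one future into another future. A minimal sketch:

// chain() returns a new future whose value is derived from the first one;
// the closure receives the completed future and returns the transformed value
IgniteFuture<String> helloFut = compute.callAsync(() -> "Hello");
IgniteFuture<Integer> lengthFut = helloFut.chain(f -> f.get().length());
System.out.println("Length: " + lengthFut.get()); // prints 5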

Resource Injection

Ignite supports dependency injection of pre-defined resources which can be used in a task, job, closure or SPI. It supports both field-based and method-based injection.

IgniteRunnable task = new IgniteRunnable() {
    private static final long serialVersionUID = 787726700536869271L;

    @IgniteInstanceResource
    private transient Ignite ignite;

    @Override
    public void run() {
        System.out.println("Hello Gaurav Bytes from: " + ignite.name());
    }
};

In the above example code, we have used the @IgniteInstanceResource annotation to inject the current Ignite instance into the IgniteRunnable object (field-based injection). Method-based injection works through an annotated setter, as sketched below.
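
A hypothetical sketch of method-based injection; MethodInjectedTask is not from the original example:

class MethodInjectedTask implements IgniteRunnable {
    private static final long serialVersionUID = 1L;

    private transient Ignite ignite;

    // Ignite calls this setter and passes the injected resource before run() executes
    @IgniteInstanceResource
    public void setIgnite(Ignite ignite) {
        this.ignite = ignite;
    }

    @Override
    public void run() {
        System.out.println("Running on node: " + ignite.name());
    }
}

There are other pre-defined resources that you can inject into jobs, tasks, closures and SPI: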

  • @IgniteInstanceResource: injects the current instance of the Ignite API
  • @CacheNameResource: injects the grid cache name provided by CacheConfiguration.getName()
  • @CacheStoreSessionResource: injects the CacheStoreSession instance
  • @LoadBalancerResource: injects the ComputeLoadBalancer instance for load balancing
  • @SpringApplicationContextResource: injects Spring's ApplicationContext

Apart from these, there are a few other resources like TaskContinuousMapperResource, TaskSessionResource, SpringResource, ServiceResource and JobContextResource.

In this article, we will show a few examples of using Apache Ignite as a compute grid, data grid and service grid, and of executing SQL queries on Apache Ignite. These are basic examples that use the basic API available. There will be a few posts in the near future explaining the available APIs in the compute grid, service grid and data grid.

Ignite SQL Example

Apache Ignite comes with JDBC thin driver support to execute SQL queries on the in-memory data grid. In the example below, we will create tables, insert data into the tables and fetch data from them. I assume that you are running Apache Ignite in your local environment; otherwise, please read the setup guide for running an Apache Ignite server.
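
The examples connect through the Ignite JDBC thin driver. On JDBC 4+ the driver registers itself automatically from the classpath; on older setups you may need to load it explicitly:

// optional on JDBC 4+; registers the Ignite thin driver explicitly
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");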

Creating Tables
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement()) {
    //line 1
    stmt.executeUpdate("CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR) WITH \"template=replicated\"");

    //line 2
    stmt.executeUpdate("CREATE TABLE Person (id LONG, name VARCHAR, city_id LONG, PRIMARY KEY (id, city_id)) WITH \"backups=1, affinityKey=city_id\"");

    stmt.executeUpdate("CREATE INDEX idx_city_name ON City (name)");

    stmt.executeUpdate("CREATE INDEX idx_person_name ON Person (name)");
}

In line 1, we are creating a City table with CacheMode as REPLICATED, which means it will be replicated across the whole cluster. There are three possible values for CacheMode: LOCAL, REPLICATED and PARTITIONED. We will discuss these in detail later.

In line 2, we are creating a Person table. You might have noticed affinityKey being used. The purpose of affinityKey is to collocate related data together (here, persons are stored together with the city they belong to).

Inserting data in tables
try (PreparedStatement stmt = conn.prepareStatement("INSERT INTO City (id, name) VALUES (?, ?)")) {

    stmt.setLong(1, 1L);
    stmt.setString(2, "Forest Hill");
    stmt.executeUpdate();

    stmt.setLong(1, 2L);
    stmt.setString(2, "Denver");
    stmt.executeUpdate();

    stmt.setLong(1, 3L);
    stmt.setString(2, "St. Petersburg");
    stmt.executeUpdate();
}

try (PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person (id, name, city_id) VALUES (?, ?, ?)")) {

    stmt.setLong(1, 1L);
    stmt.setString(2, "John Doe");
    stmt.setLong(3, 3L);
    stmt.executeUpdate();

    stmt.setLong(1, 2L);
    stmt.setString(2, "Jane Roe");
    stmt.setLong(3, 2L);
    stmt.executeUpdate();

    stmt.setLong(1, 3L);
    stmt.setString(2, "Mary Major");
    stmt.setLong(3, 1L);
    stmt.executeUpdate();

    stmt.setLong(1, 4L);
    stmt.setString(2, "Richard Miles");
    stmt.setLong(3, 2L);
    stmt.executeUpdate();
}
Querying data from tables
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement()) {
    try (ResultSet rs = stmt.executeQuery("SELECT p.name, c.name FROM Person p, City c WHERE p.city_id = c.id")) {
        while (rs.next()) {
            System.out.println(rs.getString(1) + ", " + rs.getString(2));
        }
    }
}

You can find the full example code here.

Ignite Compute Grid Example

In this example, we will use Ignite's compute grid to fetch data.

try (Ignite ignite = Ignition.start(defaultIgniteCfg("cache-reading-compute-engine"))) {
    long cityId = 1;

    ignite.compute().affinityCall("SQL_PUBLIC_CITY", cityId, new IgniteCallable<List<String>>() {
        private static final long serialVersionUID = -131151815825938052L;

        @IgniteInstanceResource
        private Ignite currentIgniteInstance;

        @Override
        public List<String> call() throws Exception {
            List<String> names = new ArrayList<>();
            IgniteCache<BinaryObject, BinaryObject> personCache = currentIgniteInstance.cache("SQL_PUBLIC_PERSON").withKeepBinary();
       
            IgniteBiPredicate<BinaryObject, BinaryObject> filter = (BinaryObject key, BinaryObject value) -> {
                return key.hasField("CITY_ID") && key.<Long>field("CITY_ID") == cityId;
            };

            ScanQuery<BinaryObject, BinaryObject> query = new ScanQuery<>(filter);

            try (QueryCursor<Entry<BinaryObject, BinaryObject>> cursor = personCache.query(query)) {
                Iterator<Entry<BinaryObject, BinaryObject>> itr = cursor.iterator();

                while (itr.hasNext()) {
                    Entry<BinaryObject, BinaryObject> cache = itr.next();
                    names.add(cache.getValue().<String>field("NAME"));
                }

            }
            return names;
        }
    }).forEach(System.out::println);
}

In this example, we are getting the list of persons residing in the same city. We call affinityCall on the SQL_PUBLIC_CITY cache with the affinity key cityId and the IgniteCallable task, so the task runs on the node that owns that city's data. In the IgniteCallable task, we have @IgniteInstanceResource, which is injected by the Ignite server running this task.

Ignite Data Grid Example

This example shows the usage of Ignite as an in-memory data grid.

try (Ignite ignite = Ignition.start(defaultIgniteCfg("ignite-data-grid"))) {
    IgniteCache<Integer, String> personCache = ignite.getOrCreateCache("personCache");
    for (int i = 0; i < 10; i++) {
        personCache.put(i, "Gaurav " + i);
    }
   
    for (int i = 0; i < 10; i++) {
        System.out.println(personCache.get(i));
    }
}

Ignite Service Grid Example

interface TimeService extends Service {
    public LocalDateTime currentDateTime();
}
 
static class TimeServiceImpl implements TimeService {
    private static final long serialVersionUID = 3977097368864906176L;

    @Override
    public void cancel(ServiceContext ctx) {
        System.out.println("Service is cancelled!");
    }

    @Override
    public void init(ServiceContext ctx) throws Exception {
        System.out.println("Service is initialized!");
    }

    @Override
    public void execute(ServiceContext ctx) throws Exception {
        System.out.println("Service is deployed!");
    }

    @Override
    public LocalDateTime currentDateTime() {
        return LocalDateTime.now();
    }
}

try (Ignite ignite = Ignition.start(defaultIgniteCfg("ignite-service-grid"))) {
    ignite.services().deployClusterSingleton("timeServiceImpl", new TimeServiceImpl());
   
    TimeService timeService = ignite.services().service("timeServiceImpl");
   
    System.out.println("Current time is: " + timeService.currentDateTime());
}

If you want to deploy a service on the grid, then it should implement the Service interface. Also, service grid deployments are not zero-deployment: you need to put the compiled jars on the Ignite server instances and then restart those instances as well.
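
When the service is deployed as a cluster singleton, nodes other than the deploying one cannot get a direct reference with service(). A minimal sketch of accessing it remotely through a proxy, reusing the names from the example above:

// obtain a typed, load-balanced proxy to the deployed service;
// 'false' means the proxy is not sticky to a single node
TimeService timeService = ignite.services().serviceProxy("timeServiceImpl", TimeService.class, false);
System.out.println("Current time is: " + timeService.currentDateTime());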

This is an introductory series on Apache Ignite. We will discuss Apache Ignite, its features, and its usage as an in-memory data grid, compute grid, distributed cache, near real-time cache and persistent distributed database.

What is Ignite?

  • It is an in-memory compute platform.
  • It is an in-memory data grid.
  • It is durable, strongly consistent and highly available.
  • It provides the option to run SQL-like queries on the cache (with a JDBC API to support this).

Durable memory

Apache Ignite is a memory-centric platform based on a durable memory architecture. It allows you to store and process data in memory (RAM) and on disk (if Ignite native persistence is enabled). When native persistence is enabled, Ignite treats the disk as the superset of the data, which is capable of surviving crashes and restarts.

In-memory features

RAM is always treated as the first memory tier; all the processing happens there. It has the following characteristics.

  • Off-heap based: all data and indexes are stored outside of the Java heap, which helps in processing petabytes of data.
  • Since all data and indexes are off-heap, there are no noticeable GC pauses: the application code is the only possible source of stop-the-world events.
  • It has predictable memory usage. You can configure the memory usage with MemoryConfiguration.
  • It uses memory as efficiently as possible and runs defragmentation routines in the background.
  • Data and indexes on disk and in memory are stored in the same page format, which improves performance and avoids unnecessary data format conversions.

Persistence features

Here are a few high-level persistence features; a configuration sketch follows the list.

  • Persistence to disk is optional; you can enable or disable it.
  • It provides data resiliency. If persistence is enabled, the full dataset is stored on physical disk and can survive cluster restarts and crashes.
  • It can execute SQL queries on the full dataset.
  • Cluster restarts are instantaneous; in-memory data is cached automatically.
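
A minimal configuration sketch, assuming Ignite 2.3+ where the DataStorageConfiguration API is available (older versions used MemoryConfiguration and PersistentStoreConfiguration instead):

IgniteConfiguration icfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration()
          .setMaxSize(512L * 1024 * 1024)   // cap the default region at 512 MB of RAM
          .setPersistenceEnabled(true);     // keep the full dataset on disk as well

icfg.setDataStorageConfiguration(storageCfg);

try (Ignite ignite = Ignition.start(icfg)) {
    // with native persistence enabled the cluster starts inactive; activate it before use
    ignite.cluster().active(true);
}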

In this post, we will externalize the properties used in the application into a properties file and will use PropertySourcesPlaceholderConfigurer to resolve the placeholders at application startup time.

Java Configuration for PropertySourcesPlaceholderConfigurer

@Configuration
public class AppConfig {

  @Bean
  public PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
    PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer = new PropertySourcesPlaceholderConfigurer();
    propertySourcesPlaceholderConfigurer.setLocations(new ClassPathResource("application-db.properties"));
    //propertySourcesPlaceholderConfigurer.setIgnoreUnresolvablePlaceholders(true);
    //propertySourcesPlaceholderConfigurer.setIgnoreResourceNotFound(true);
    return propertySourcesPlaceholderConfigurer;
  }
}

We created an object of PropertySourcesPlaceholderConfigurer and set the locations to search. In this example, we used ClassPathResource to resolve the properties file from the classpath. You can also use a file-based Resource, which needs the absolute path of the file.
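
For example, a sketch using Spring's FileSystemResource (the path here is hypothetical):

// resolves the properties file from an absolute filesystem path instead of the classpath
propertySourcesPlaceholderConfigurer.setLocations(new FileSystemResource("/opt/myapp/application-db.properties"));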

DBProperties file

@Configuration
public class DBProperties {
 
  @Value("${db.username}")
  private String userName;
 
  @Value("${db.password}")
  private String password;
 
  @Value("${db.url}")
  private String url;

  //getters for instance fields
}

We used the @Value annotation to resolve the placeholders from the properties file. A sample application-db.properties is shown below.
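
The sample below only illustrates the expected keys; the values are placeholders, not taken from the original project:

# application-db.properties
db.username=dbuser
db.password=dbpassword
db.url=jdbc:h2:mem:testdb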

Testing the configuration

public class Main {
  private static final Logger logger = Logger.getLogger(Main.class.getName());
 
  public static void main(String[] args) {
    try (ConfigurableApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class, DBProperties.class);) {
      DBProperties dbProperties = context.getBean(DBProperties.class);
      logger.info("This is dbProperties: " + dbProperties.toString());
    }
  }
}

For testing, we created an AnnotationConfigApplicationContext, got the DBProperties bean from it and logged it using the Logger. This is a simple way to externalize the configuration properties from the framework configuration. You can also get the full example code from Github.

In this post, we will discuss Digest Authentication with Spring Security. You can also read my previous post on Basic Authentication with Spring Security.

What is Digest Authentication?

  • This authentication method makes use of a hashing algorithm to hash the password (called the password hash) entered by the user before sending it to the server. This, obviously, makes it much safer than the basic authentication method, in which the user's password travels in plain text (or base64 encoded) and can easily be read by whoever intercepts it.
  • There are many such hashing algorithms in Java as well, which can prove really effective for password security, such as the MD5, SHA, BCrypt, SCrypt and PBKDF2WithHmacSHA1 algorithms.
  • Please remember that once this password hash is generated and stored in the database, you cannot convert it back to the original password. Each time a user logs into the application, you have to regenerate the password hash and match it with the hash stored in the database. So, if a user forgets their password, you will have to send them a temporary password and ask them to replace it with a new one. Well, it's a common trend nowadays.

Let's start building a simple Spring Boot application with Digest Authentication using Spring Security.

Adding dependencies in pom.xml

We will use spring-boot-starter-security as the Maven dependency for Spring Security.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-security</artifactId>
</dependency>

Digest related Java Configuration

@Bean
DigestAuthenticationFilter digestFilter(DigestAuthenticationEntryPoint digestAuthenticationEntryPoint, UserCache digestUserCache, UserDetailsService userDetailsService) {
  DigestAuthenticationFilter filter = new DigestAuthenticationFilter();
  filter.setAuthenticationEntryPoint(digestAuthenticationEntryPoint);
  filter.setUserDetailsService(userDetailsService);
  filter.setUserCache(digestUserCache);
  return filter;
}
 
@Bean
UserCache digestUserCache() throws Exception {
  return new SpringCacheBasedUserCache(new ConcurrentMapCache("digestUserCache"));
}
 
@Bean
DigestAuthenticationEntryPoint digestAuthenticationEntry() {
  DigestAuthenticationEntryPoint digestAuthenticationEntry = new DigestAuthenticationEntryPoint();
  digestAuthenticationEntry.setRealmName("GAURAVBYTES.COM");
  digestAuthenticationEntry.setKey("GRM");
  digestAuthenticationEntry.setNonceValiditySeconds(60);
  return digestAuthenticationEntry;
}

You need to register the DigestAuthenticationFilter in your Spring context and add it to the security filter chain. DigestAuthenticationFilter requires a DigestAuthenticationEntryPoint and a UserDetailsService to authenticate the user.
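
A minimal sketch of the filter chain wiring, assuming the WebSecurityConfigurerAdapter style of configuration (the bean names match the configuration above; your actual security rules may differ):

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

  @Autowired
  private DigestAuthenticationFilter digestFilter;

  @Autowired
  private DigestAuthenticationEntryPoint digestAuthenticationEntryPoint;

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.addFilter(digestFilter)  // register the digest filter in the chain
        .exceptionHandling()
        .authenticationEntryPoint(digestAuthenticationEntryPoint)  // challenge with a nonce on failure
        .and()
        .authorizeRequests()
        .anyRequest().authenticated();  // every request must be authenticated
  }
}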

The purpose of the DigestAuthenticationEntryPoint is to send a valid nonce back to the user if authentication fails, or to enforce the authentication.

The purpose of the UserDetailsService is to provide UserDetails, like the password and the list of roles for a user. UserDetailsService is an interface. I have implemented it with DummyUserDetailsService, which loads the details for whatever userName is passed; you can restrict it to a few users or make it database-backed. One thing to remember is that the password needs to be in plain-text format here. You can also use InMemoryUserDetailsManager for storing a handful of users, configured either through Java configuration or with XML-based configuration, who may access your application.
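
A hypothetical sketch of such a DummyUserDetailsService (the actual implementation in the example project may differ; the plain-text password matches the one used in the Postman steps below):

public class DummyUserDetailsService implements UserDetailsService {

  @Override
  public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
    // accept any username and return a fixed plain-text password;
    // digest authentication needs the plain password to recompute the hash
    return new User(username, "pwd",
        Collections.singletonList(new SimpleGrantedAuthority("ROLE_USER")));
  }
}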

In the example, I have also used caching for the UserDetails. I have used SpringCacheBasedUserCache, and the underlying cache is a ConcurrentMapCache. You can use any other caching solution.

Running the example

You can download the example code from Github. I will be using Postman to run the example. Here are the steps you need to follow.

1. Open Postman and enter the URL (localhost:8082).

2. Click on the Authorization tab below the URL and select Digest Auth from the Type dropdown.

3. Enter the username (gaurav), realm (GAURAVBYTES.COM), password (pwd) and algorithm (MD5), and leave nonce empty. Click the Send button.

4. You will get a 401 Unauthorized response like below.

5. If you look at the response headers, you will see the "WWW-Authenticate" header. Copy the value of the nonce field and enter it in the nonce text field.

6. Click the Send button. Voila!!! You got a valid response.

This is how we implement Digest Authentication with Spring Security. I hope you find this post informative and helpful.

In this post, we will create RESTful web services which use JPA to persist the data in an embedded database (H2). Also, you can read more on RESTful web services.

Adding pom.xml dependencies

We will add spring-boot-starter-data-jpa to manage the JPA dependencies. We will use the H2 embedded database server for persistence.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <scope>runtime</scope>
</dependency>

Creating entities

We have three entities in the example project, viz. Product, Rating and User.

@Entity
@Table(name = "product_ratings", schema = "product")
public class Rating {
  @Id
  @GeneratedValue
  @Column(name="rating_id")
  private Long ratingId;
 
  private double rating;
 
  @Column(name="product_id")
  private String productId;
 
  @Column(name="user_id")
  private String userId;
 
  public Rating() {
  
  }
 
  public Rating(Long ratingId, double rating, String productId, String userId) {
    super();
    this.ratingId = ratingId;
    this.rating = rating;
    this.productId = productId;
    this.userId = userId;
  }
  //getters, setters, toString, hashCode, equals
}

The @Entity annotation specifies that this is an entity class. The @Table annotation specifies the primary table for the entity class; you can configure the table name and schema with it. @Id specifies that the field is the primary key of the entity. @GeneratedValue specifies how the primary key will be generated. @Column is used to specify the mapped column for a property or field. You can also configure whether the property is unique or nullable, its length, precision and scale, and whether it should be included in SQL INSERT or UPDATE statements.

Creating Repositories

You can extend the JpaRepository or CrudRepository interface to create your repository.

@Transactional
public interface ProductRepository extends JpaRepository<Product, String> {

}

Here, I created a ProductRepository interface which extends the JpaRepository interface. You may wonder: instead of writing a repository class, we have created an interface, so where does the implementation come from? The simple answer is the SimpleJpaRepository class. Spring generates a proxy, and all the requests are handled by SimpleJpaRepository.

It contains all the basic methods like find, delete, save and findAll, and a few sort-related/criteria-based search methods. It could be the case that you need to write your own specific method; in my case, finding all the ratings of a product. This can be done as follows.

@Transactional
public interface RatingRepository extends JpaRepository<Rating, Long> {
  public Iterable<Rating> getRatingsByProductId(final String productId);
}

@EnableJpaRepositories annotation

This annotation enables JPA repositories. By default, it scans for Spring Data repositories in the package of the annotated configuration class and below. You can also change the basePackages to scan in this annotation.

@SpringBootApplication
@EnableJpaRepositories
public class App {
  public static void main(String[] args) {
    SpringApplication.run(App.class, args);
  }
}

In our example, we have used this annotation on our App class, so it will scan all the packages in and under com.gauravbytes.gkart. To scan a different location, set basePackages explicitly, as sketched below.
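
A minimal sketch of overriding the scanned packages (the package name here is the example project's own):

@SpringBootApplication
@EnableJpaRepositories(basePackages = "com.gauravbytes.gkart")
public class App {
  public static void main(String[] args) {
    SpringApplication.run(App.class, args);
  }
}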

These are the few steps needed to create a simple JPA project. You can get the full code on Github.

Few important points

If you are using the embedded database server in the above example, then you may need to set the following configurations.

  • Add schema.sql to the classpath if you are using a schema in your tables (entity classes). You can get a sample here.
  • You can change the datasource name (by default testdb) and other properties. See org.springframework.boot.autoconfigure.jdbc.DataSourceProperties for the full list of properties that you can configure. A sample is sketched below.
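
A sample application.properties; the property names come from Spring Boot, while the values are only illustrative:

# application.properties (sample values)
# overrides the default embedded datasource name (testdb)
spring.datasource.name=productdb
# log the generated SQL
spring.jpa.show-sql=true
# rely on schema.sql instead of automatic DDL generation
spring.jpa.hibernate.ddl-auto=none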