Top 25 Java Developer Interview Question Series — 25

I interviewed for a Senior Software Engineer role at a multinational corporation (MNC), where the discussion focused on core topics like Java fundamentals, Spring Boot, Kafka, and the challenges of working with microservices. This write-up distills that experience into a concise guide to the critical areas you need to master, making it ideal for both quick revision and thorough preparation.

Ajay Rathod
16 min read · Jan 19, 2025

Are you preparing for a job interview as a Java developer?

Find my book Guide To Clear Java Developer Interview here: Gumroad (PDF format) and Amazon (Kindle eBook).

Guide To Clear Spring-Boot Microservice Interview here: Gumroad (PDF format) and Amazon (Kindle eBook).

Guide To Clear Front-End Developer Interview here: Gumroad (PDF format) and Amazon (Kindle eBook).

Download the sample copy here:

Guide To Clear Java Developer Interview [Free Sample Copy]

Guide To Clear Spring-Boot Microservice Interview [Free Sample Copy]

Guide To Clear Front-End Developer Interview [Free Sample Copy]

If you are looking for personalised guidance, here is the 1:1 link: https://topmate.io/ajay_rathod11

Here is the list of questions:

Find the actual longest substring without repeating characters (given input: “abcabcbb”)

public class LongestSubstringWithoutRepeating {
    public static void main(String[] args) {
        String input = "abcabcbb";
        String result = longestSubstringWithoutRepeatingCharacters(input);
        System.out.println("The longest substring without repeating characters is: '" + result + "'");
    }

    public static String longestSubstringWithoutRepeatingCharacters(String s) {
        int[] charIndexMap = new int[128]; // for each ASCII character, stores (last seen index + 1)
        int start = 0;                     // left edge of the current sliding window
        int maxLength = 0;
        String longestSubstr = "";

        // Sliding window: extend the window one character at a time.
        for (int end = 0; end < s.length(); end++) {
            char currentChar = s.charAt(end);
            // If this character was already seen inside the window, move the start past it.
            start = Math.max(start, charIndexMap[currentChar]);
            charIndexMap[currentChar] = end + 1;
            if (end - start + 1 > maxLength) {
                maxLength = end - start + 1;
                longestSubstr = s.substring(start, end + 1);
            }
        }

        return longestSubstr; // "abc" for the input "abcabcbb"
    }
}

What are the Spring Boot Annotations Used for the Data Layer?

Spring Boot provides several annotations for the data layer to simplify database interactions and ORM (Object-Relational Mapping). Here are some commonly used annotations:

  1. @Entity: Specifies that the class is an entity and is mapped to a database table.
  2. @Table: Specifies the table in the database with which the entity is mapped.
  3. @Id: Specifies the primary key of an entity.
  4. @GeneratedValue: Specifies the generation strategy for the primary key.
  5. @Column: Specifies the mapped column for a persistent property or field.
  6. @OneToOne: Specifies a one-to-one relationship between two entities.
  7. @OneToMany: Specifies a one-to-many relationship between two entities.
  8. @ManyToOne: Specifies a many-to-one relationship between two entities.
  9. @ManyToMany: Specifies a many-to-many relationship between two entities.
  10. @JoinColumn: Specifies the foreign key column.
  11. @Repository: Indicates that the class is a repository, which is an abstraction of data access and storage.
  12. @Transactional: Specifies the transactional behavior for methods or classes.

These annotations help in defining the data model and managing database operations in a Spring Boot application.
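To see several of these together, here is a minimal sketch of a JPA entity and repository. The User and Department classes, their fields, and UserRepository are hypothetical examples; the javax.persistence imports match the javax.servlet style used later in this article (Spring Boot 3 would use jakarta.persistence instead):

import javax.persistence.*;

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // let the database generate the key
    private Long id;

    @Column(name = "user_name", nullable = false) // map the field to a specific column
    private String name;

    @ManyToOne
    @JoinColumn(name = "department_id") // foreign key column; Department is another @Entity (not shown)
    private Department department;

    // getters and setters omitted for brevity
}

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository // data-access abstraction; Spring Data generates the implementation
public interface UserRepository extends JpaRepository<User, Long> {
}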

What are the Key Kafka Components?

Apache Kafka is a distributed streaming platform with several key components:

  1. Producer: An application that sends messages to Kafka topics.
  2. Consumer: An application that reads messages from Kafka topics.
  3. Topic: A category or feed name to which records are published. Topics are partitioned and replicated across multiple brokers.
  4. Partition: A single log within a topic. Each partition is an ordered, immutable sequence of records.
  5. Broker: A Kafka server that stores data and serves clients. Multiple brokers form a Kafka cluster.
  6. Cluster: A group of Kafka brokers working together. A cluster can span multiple servers.
  7. Zookeeper: A centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. Kafka has traditionally used Zookeeper to manage brokers and topics, although newer Kafka versions can run without it in KRaft mode.
  8. Producer API: Allows applications to send streams of data to topics in the Kafka cluster.
  9. Consumer API: Allows applications to read streams of data from topics in the Kafka cluster.
  10. Streams API: Allows applications to process data in real-time using stream processing.
  11. Connect API: Allows for building and running reusable data import/export connectors that consume from or produce to Kafka topics.

These components work together to provide a robust and scalable messaging system.
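For instance, here is a minimal sketch of the Producer API, assuming the kafka-clients library is on the classpath; the broker address and the "orders" topic are placeholders:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send one record to the hypothetical "orders" topic.
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
        }
    }
}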

What are Kafka Topics?

In Apache Kafka, a Topic is a logical channel to which producers send records and from which consumers read records. Here are some key points about Kafka topics:

  1. Logical Grouping: Topics are used to categorize messages. Each topic is identified by a unique name.
  2. Partitions: Each topic is divided into partitions, which are ordered, immutable sequences of records. Partitions allow Kafka to scale horizontally by distributing data across multiple brokers.
  3. Replication: Partitions can be replicated across multiple brokers to ensure fault tolerance. Each partition has one leader and multiple followers.
  4. Offset: Each record within a partition has a unique offset, which is a sequential ID that identifies the record’s position within the partition.
  5. Retention: Kafka retains records in a topic for a configurable amount of time or until a configurable size limit is reached, regardless of whether they have been consumed.
  6. Durability: Records are written to disk and replicated across brokers to ensure durability and reliability.
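As an illustration, a topic with explicit partition and replication settings can be created programmatically with the AdminClient from kafka-clients; the broker address, topic name, and counts below are placeholders:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.List;
import java.util.Properties;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical "orders" topic: 3 partitions, replication factor 2.
            NewTopic topic = new NewTopic("orders", 3, (short) 2);
            admin.createTopics(List.of(topic)).all().get(); // block until the broker confirms
        }
    }
}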

What are the Key Challenges in Microservice Communication?

Microservice communication presents several key challenges:

  1. Service Discovery: Identifying the network locations of service instances dynamically.
  2. Load Balancing: Distributing requests across multiple service instances to ensure even load distribution.
  3. Latency and Network Reliability: Ensuring low latency and handling network failures or delays.
  4. Data Consistency: Maintaining consistency across distributed services, especially in the presence of failures.
  5. Security: Securing communication channels and ensuring proper authentication and authorization.
  6. Message Formats: Agreeing on common data formats and protocols for communication (e.g., JSON, XML, gRPC).
  7. Error Handling and Retries: Managing errors and implementing retry mechanisms to handle transient failures.
  8. Circuit Breaking: Preventing cascading failures by implementing circuit breakers to stop calls to failing services (a minimal sketch follows this list).
  9. Monitoring and Logging: Tracking and logging inter-service communication for debugging and performance monitoring.
  10. Versioning: Managing different versions of services to ensure backward compatibility and smooth upgrades.
  11. Scalability: Ensuring that the communication mechanism scales with the number of services and instances.
  12. Transaction Management: Handling distributed transactions and ensuring atomicity and consistency across services.
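To make circuit breaking (item 8) concrete, here is a toy circuit breaker in plain Java. Real projects would normally reach for a library such as Resilience4j; this hand-rolled sketch only illustrates the closed/open idea, and the thresholds are made-up values:

import java.util.function.Supplier;

public class SimpleCircuitBreaker {
    private enum State { CLOSED, OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;

    private static final int FAILURE_THRESHOLD = 3;   // made-up threshold
    private static final long RETRY_AFTER_MS = 5_000; // made-up cool-down

    public <T> T call(Supplier<T> remoteCall, T fallback) {
        if (state == State.OPEN
                && System.currentTimeMillis() - openedAt < RETRY_AFTER_MS) {
            return fallback; // fail fast instead of hitting a failing service
        }
        try {
            T result = remoteCall.get();
            state = State.CLOSED; // success closes the circuit again
            failures = 0;
            return result;
        } catch (RuntimeException e) {
            if (++failures >= FAILURE_THRESHOLD) {
                state = State.OPEN; // too many failures: open the circuit
                openedAt = System.currentTimeMillis();
            }
            return fallback;
        }
    }
}

While the circuit is open, callers get the fallback immediately instead of waiting on a service that is known to be failing.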

How Does Spring Security Work for Authentication and Authorization?
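At a high level, Spring Security is built around a chain of servlet filters that sits in front of the application.

Authentication (who are you?): a filter such as UsernamePasswordAuthenticationFilter extracts credentials from the request and hands them to the AuthenticationManager, which delegates to one or more AuthenticationProviders (for example, DaoAuthenticationProvider backed by a UserDetailsService and a PasswordEncoder). On success, the resulting Authentication object is stored in the SecurityContextHolder for the rest of the request.

Authorization (what are you allowed to do?): once the caller is authenticated, access decisions are applied to URLs and to methods, via configuration rules or annotations such as @PreAuthorize.

A minimal sketch of a security configuration, assuming the Spring Security 6 style of declaring a SecurityFilterChain bean (the class name and endpoint patterns are hypothetical):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/admin/**").hasRole("ADMIN") // authorization rule
                .anyRequest().authenticated())                 // everything else needs login
            .httpBasic(Customizer.withDefaults());             // authentication mechanism
        return http.build();
    }
}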

What is the Difference between @Interceptor and @Filter in Spring Boot?

In Spring Boot, @Interceptor and @Filter are used to intercept and process requests, but they serve different purposes and operate at different levels of the request processing lifecycle.

@Interceptor

  • Purpose: Interceptors are used to intercept and process HTTP requests and responses at the controller level.
  • Scope: They are typically used for tasks such as logging, authentication, and authorization before the request reaches the controller or after the response leaves the controller.
  • Implementation: Implemented by creating a class that implements the HandlerInterceptor interface and overriding its methods (preHandle, postHandle, afterCompletion).
  • Configuration: Configured by registering the interceptor with a WebMvcConfigurer implementation.
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Register the custom interceptor for all handler mappings.
        registry.addInterceptor(new MyInterceptor());
    }
}
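The MyInterceptor referenced above is not shown in the original; a minimal sketch implementing HandlerInterceptor might look like this (the other HandlerInterceptor methods have default implementations since Spring 5):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.HandlerInterceptor;

public class MyInterceptor implements HandlerInterceptor {
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        // Runs before the controller; return false to abort the request.
        System.out.println("Incoming request: " + request.getRequestURI());
        return true;
    }
}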

@Filter

  • Purpose: Filters are used to perform filtering tasks on the request and response before they reach the servlet or after they leave the servlet.
  • Scope: They operate at a lower level than interceptors and are part of the servlet specification. Filters are used for tasks such as request logging, compression, and security checks.
  • Implementation: Implemented by creating a class that implements the Filter interface and overriding its doFilter method.
  • Configuration: Configured by annotating the filter class with @WebFilter or by registering it as a Filter bean in a @Configuration class.
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import java.io.IOException;

@WebFilter(urlPatterns = "/*") // apply to every request path
public class MyFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Pre-processing: runs before the request reaches the servlet
        chain.doFilter(request, response);
        // Post-processing: runs after the response leaves the servlet
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // Initialization code
    }

    @Override
    public void destroy() {
        // Cleanup code
    }
}

Key Differences

  • Level: Filters operate at the servlet level, while interceptors operate at the Spring MVC controller level.
  • Use Cases: Filters are used for generic request/response processing, while interceptors are used for tasks specific to Spring MVC, such as handling model attributes or view rendering.
  • Configuration: Filters are configured via the servlet container or Spring Boot configuration, while interceptors are configured via Spring MVC configuration.

What is Dependency Injection?

Dependency Injection (DI) is a design pattern used in software development to achieve Inversion of Control (IoC) between classes and their dependencies. It allows an object to receive its dependencies from an external source rather than creating them itself. This promotes loose coupling, easier testing, and better maintainability.

Key Concepts

  1. Dependency: An object that another object depends on.
  2. Injection: The process of providing the dependencies to an object.

Types of Dependency Injection

  1. Constructor Injection: Dependencies are provided through a class constructor.
  2. Setter Injection: Dependencies are provided through setter methods.
  3. Field Injection: Dependencies are injected directly into fields (less common and generally not recommended due to lack of immutability and testability).
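A minimal constructor-injection sketch in Spring; OrderService, PaymentClient, and their methods are hypothetical:

import org.springframework.stereotype.Service;

// Hypothetical dependency; some implementation must be registered as a bean.
interface PaymentClient {
    void charge(String orderId);
}

@Service
public class OrderService {

    private final PaymentClient paymentClient;

    // Constructor injection: Spring supplies the PaymentClient bean at startup.
    // The final field keeps the dependency immutable and easy to mock in tests.
    public OrderService(PaymentClient paymentClient) {
        this.paymentClient = paymentClient;
    }

    public void placeOrder(String orderId) {
        paymentClient.charge(orderId);
    }
}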

What are the Key AWS Components?

Amazon Web Services (AWS) offers a wide range of cloud computing services. Here are some of the key components:

Compute

  1. Amazon EC2 (Elastic Compute Cloud): Provides scalable virtual servers.
  2. AWS Lambda: Serverless compute service that runs code in response to events.
  3. Amazon ECS (Elastic Container Service): Container orchestration service.
  4. Amazon EKS (Elastic Kubernetes Service): Managed Kubernetes service.

Storage

  1. Amazon S3 (Simple Storage Service): Scalable object storage service.
  2. Amazon EBS (Elastic Block Store): Block storage for use with EC2 instances.
  3. Amazon EFS (Elastic File System): Scalable file storage for use with EC2.

Database

  1. Amazon RDS (Relational Database Service): Managed relational database service.
  2. Amazon DynamoDB: Managed NoSQL database service.
  3. Amazon Aurora: High-performance managed relational database.

Networking

  1. Amazon VPC (Virtual Private Cloud): Isolated cloud resources.
  2. Amazon Route 53: Scalable DNS and domain name registration.
  3. AWS Direct Connect: Dedicated network connection to AWS.

Security

  1. AWS IAM (Identity and Access Management): Manage access to AWS services and resources.
  2. AWS KMS (Key Management Service): Managed service for creating and controlling encryption keys.
  3. AWS Shield: Managed DDoS protection service.

Management and Monitoring

  1. Amazon CloudWatch: Monitoring and observability service.
  2. AWS CloudTrail: Tracks user activity and API usage.
  3. AWS Config: Tracks AWS resource configurations.

Analytics

  1. Amazon EMR (Elastic MapReduce): Big data processing using Hadoop.
  2. Amazon Redshift: Data warehousing service.
  3. Amazon Kinesis: Real-time data streaming service.

Machine Learning

  1. Amazon SageMaker: Build, train, and deploy machine learning models.
  2. Amazon Rekognition: Image and video analysis.
  3. Amazon Comprehend: Natural language processing.

Developer Tools

  1. AWS CodeCommit: Managed source control service.
  2. AWS CodeBuild: Continuous integration service.
  3. AWS CodeDeploy: Automated deployment service.
  4. AWS CodePipeline: Continuous delivery service.

Application Integration

  1. Amazon SQS (Simple Queue Service): Message queuing service.
  2. Amazon SNS (Simple Notification Service): Pub/sub messaging service.
  3. AWS Step Functions: Coordinate distributed applications and microservices.

Content Delivery

  1. Amazon CloudFront: Content delivery network (CDN).
  2. AWS Global Accelerator: Improves availability and performance of applications.

Migration and Transfer

  1. AWS DMS (Database Migration Service): Migrate databases to AWS.
  2. AWS Snowball: Data transfer service using physical devices.

These components provide a comprehensive set of tools and services to build, deploy, and manage applications in the cloud.

What are Lambda Expressions in Java?

Lambda expressions in Java are a feature introduced in Java 8 that provide a clear and concise way to represent an instance of a functional interface (an interface with a single abstract method) using an expression. They enable you to treat functionality as a method argument or treat code as data. Lambda expressions are particularly useful for writing concise and readable code, especially when working with collections and streams.

Syntax

The syntax of a lambda expression consists of three parts:

  1. Parameter list: A comma-separated list of parameters enclosed in parentheses.
  2. Arrow token: The -> symbol.
  3. Body: A block of code enclosed in braces or a single expression.
import java.util.Arrays;
import java.util.List;

public class LambdaExample {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Alice", "Bob", "Charlie");

        // Using a lambda expression to print each name
        names.forEach(name -> System.out.println(name));
    }
}

What are Intermediate Operations in Streams?

In Java Streams, intermediate operations are functions that transform a stream into another stream. These operations are lazy, meaning they are not executed until a terminal operation is invoked on the stream. Intermediate operations are used to process and filter data in a pipeline.

Common Intermediate Operations

  1. filter: Filters elements based on a predicate.
  2. map: Transforms each element using a given function.
  3. flatMap: Transforms each element into a stream and flattens the resulting streams into a single stream.
  4. distinct: Removes duplicate elements.
  5. sorted: Sorts the elements based on a comparator.
  6. peek: Performs an action on each element as it is consumed from the stream.
  7. limit: Truncates the stream to a given number of elements.
  8. skip: Skips the first N elements of the stream.
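A small pipeline combining several of these operations (the numbers are arbitrary sample data):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class IntermediateOpsExample {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(5, 3, 8, 3, 1, 9, 8);

        List<Integer> result = numbers.stream()
                .filter(n -> n > 2)   // keep values greater than 2
                .distinct()           // drop the duplicate 3 and 8
                .sorted()             // natural ordering
                .map(n -> n * 10)     // transform each element
                .limit(3)             // keep only the first 3
                .collect(Collectors.toList()); // terminal operation triggers execution

        System.out.println(result); // [30, 50, 80]
    }
}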

What is the Difference between flatMap and map in Stream?

In Java Streams, map and flatMap are both intermediate operations used to transform elements in a stream. However, they serve different purposes and have distinct behaviors.

map

The map method transforms each element of the stream into another element using a provided function. It applies the function to each element and returns a new stream consisting of the transformed elements.

flatMap

The flatMap method is used to transform each element of the stream into a stream of other elements, and then flatten the resulting streams into a single stream. It is particularly useful when dealing with nested collections or when each element needs to be transformed into multiple elements.

Key Differences

Transformation:

  • map: Transforms each element into another element.
  • flatMap: Transforms each element into a stream of elements and then flattens the resulting streams into a single stream.

Return Type:

  • map: Returns a stream of the same structure but with transformed elements.
  • flatMap: Returns a flattened stream of elements.

Use Case:

  • map: Used when you want to apply a function to each element and get a single result for each element.
  • flatMap: Used when you want to apply a function to each element that results in multiple elements and you want to flatten the results into a single stream.
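A side-by-side sketch using a nested list as made-up sample data:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapVsFlatMapExample {
    public static void main(String[] args) {
        List<List<String>> nested = Arrays.asList(
                Arrays.asList("a", "b"),
                Arrays.asList("c", "d"));

        // map: one output element per input element (here, each list maps to its size).
        List<Integer> sizes = nested.stream()
                .map(List::size)
                .collect(Collectors.toList());
        System.out.println(sizes); // [2, 2]

        // flatMap: each list becomes a stream, then all streams are flattened into one.
        List<String> flat = nested.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
        System.out.println(flat); // [a, b, c, d]
    }
}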

What is Functional Interface in Java?

A functional interface in Java is an interface that contains exactly one abstract method. These interfaces can have any number of default or static methods but must have only one abstract method. Functional interfaces are used as the basis for lambda expressions and method references, enabling functional programming in Java.

Key Characteristics

  1. Single Abstract Method (SAM): A functional interface must have exactly one abstract method.
  2. @FunctionalInterface Annotation: This annotation is optional but recommended. It indicates that the interface is intended to be a functional interface and helps the compiler enforce this constraint.

Common Functional Interfaces

Java provides several built-in functional interfaces in the java.util.function package, such as:

  1. Function<T, R>: Represents a function that takes one argument and produces a result.
  2. Consumer<T>: Represents an operation that takes a single argument and returns no result.
  3. Supplier<T>: Represents a supplier of results.
  4. Predicate<T>: Represents a predicate (boolean-valued function) of one argument.
  5. UnaryOperator<T>: Represents an operation on a single operand that produces a result of the same type.
  6. BinaryOperator<T>: Represents an operation upon two operands of the same type, producing a result of the same type.
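A minimal sketch of defining and using a custom functional interface; the Calculator interface is a made-up example:

// Custom functional interface: exactly one abstract method.
@FunctionalInterface
interface Calculator {
    int apply(int a, int b);
}

public class FunctionalInterfaceExample {
    public static void main(String[] args) {
        Calculator add = (a, b) -> a + b;    // a lambda implements the single abstract method
        Calculator max = Math::max;          // a method reference works too

        System.out.println(add.apply(2, 3)); // 5
        System.out.println(max.apply(2, 3)); // 3
    }
}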

Explain collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))

In Java Streams, the collect method is used to transform the elements of a stream into a different form, such as a list, set, or map. The Collectors utility class provides various collector implementations, including groupingBy and counting. Let's explore the differences between Collectors.groupingBy(Function.identity()) and Collectors.groupingBy(Function.identity(), Collectors.counting()).

Collectors.groupingBy(Function.identity())

The Collectors.groupingBy method is used to group the elements of a stream based on a classifier function. When using Function.identity(), the elements are grouped by themselves. This results in a Map where the keys are the elements of the stream, and the values are lists of elements that are equal to the key.

Collectors.groupingBy(Function.identity(), Collectors.counting())

The Collectors.groupingBy method can also take a downstream collector as a second argument. When using Collectors.counting(), the downstream collector counts the number of elements in each group. This results in a Map where the keys are the elements of the stream, and the values are the counts of those elements.
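A short sketch with made-up sample data (the printed map order is not guaranteed):

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GroupingByExample {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("apple", "banana", "apple", "cherry", "banana", "apple");

        // Group elements by themselves: Map<String, List<String>>
        Map<String, List<String>> groups = words.stream()
                .collect(Collectors.groupingBy(Function.identity()));
        System.out.println(groups); // e.g. {banana=[banana, banana], apple=[apple, apple, apple], cherry=[cherry]}

        // Count occurrences of each element: Map<String, Long>
        Map<String, Long> counts = words.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
        System.out.println(counts); // e.g. {banana=2, apple=3, cherry=1}
    }
}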

What is ExecutorService in Java?

The ExecutorService in Java is a higher-level replacement for working directly with threads. It is part of the java.util.concurrent package and provides a framework for managing a pool of threads to execute tasks asynchronously. The ExecutorService simplifies the process of managing thread life cycles and allows for more flexible and scalable concurrent programming.

Key Features

  1. Thread Pool Management: Manages a pool of worker threads, reusing them to execute multiple tasks.
  2. Task Submission: Allows tasks to be submitted for execution using various methods.
  3. Task Scheduling: Supports scheduling tasks to run after a delay or periodically.
  4. Graceful Shutdown: Provides methods to shut down the executor service gracefully.

Common Methods

  1. submit: Submits a task for execution and returns a Future representing the task's result.
  2. invokeAll: Executes a collection of tasks and returns a list of Future objects.
  3. invokeAny: Executes a collection of tasks and returns the result of one that completes successfully.
  4. shutdown: Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted.
  5. shutdownNow: Attempts to stop all actively executing tasks and halts the processing of waiting tasks.
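A small sketch of a fixed thread pool handling a task:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorServiceExample {
    public static void main(String[] args) throws Exception {
        // Fixed pool of 4 worker threads, reused across tasks.
        ExecutorService executor = Executors.newFixedThreadPool(4);

        // submit returns a Future representing the pending result.
        Future<Integer> future = executor.submit(() -> 21 + 21);
        System.out.println("Result: " + future.get()); // get() blocks until the task completes

        executor.shutdown(); // finish queued tasks, accept no new ones
    }
}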

What is Metaspace in Java 8?

In Java 8, the Metaspace is a new memory space introduced to replace the Permanent Generation (PermGen) space. The Metaspace is part of the Java Virtual Machine (JVM) and is used to store class metadata. Here are the key points about Metaspace:

Key Characteristics

  1. Native Memory: Unlike PermGen, which was part of the Java heap, Metaspace is allocated from native memory (outside the Java heap).
  2. Automatic Sizing: By default, Metaspace can grow dynamically as needed, limited only by the available system memory.
  3. Class Metadata Storage: Stores class definitions, method metadata, and other class-related information.
  4. Garbage Collection: Metaspace is managed by the JVM, and class metadata is garbage collected when classes are no longer needed.

Advantages Over PermGen

  1. Eliminates PermGen Issues: PermGen had fixed size limits, which often led to OutOfMemoryError: PermGen space errors. Metaspace, being dynamically resizable, reduces the likelihood of such errors.
  2. Improved Performance: Metaspace can improve performance by reducing the need for frequent garbage collection of class metadata.
  3. Simplified Configuration: Metaspace requires less configuration compared to PermGen, as it can grow automatically.
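When explicit bounds are still needed, Metaspace can be capped with JVM flags, for example: java -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -jar app.jar (here app.jar and the sizes are placeholder values; with no flags set, Metaspace simply grows with available native memory).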

What is Mark and Sweep Algorithm in Java?

The Mark and Sweep algorithm is a fundamental garbage collection technique used in Java to manage memory. It is designed to identify and reclaim memory occupied by objects that are no longer reachable or needed by the application. The algorithm operates in two main phases: the mark phase and the sweep phase.

Phases of the Mark and Sweep Algorithm

Mark Phase:

  • Objective: Identify all reachable objects.
  • Process: The garbage collector starts from a set of root references (e.g., local variables, static fields, and active threads) and traverses the object graph. During this traversal, it marks all reachable objects.
  • Marking: Each object that is encountered during the traversal is marked (typically by setting a flag in the object’s header).

Sweep Phase:

  • Objective: Reclaim memory occupied by unmarked objects.
  • Process: The garbage collector scans the heap for objects that were not marked during the mark phase. These unmarked objects are considered unreachable and their memory is reclaimed.
  • Reclamation: The memory occupied by unmarked objects is added back to the pool of free memory, making it available for future allocations.
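Here is a toy illustration of the two phases over an explicit object graph. This is purely conceptual; the HotSpot JVM's real collectors are far more sophisticated:

import java.util.ArrayList;
import java.util.List;

// Toy mark-and-sweep over a hand-built object graph (not how the JVM does it).
class ToyHeap {
    static class Obj {
        boolean marked;
        List<Obj> refs = new ArrayList<>(); // outgoing references
    }

    List<Obj> heap = new ArrayList<>();  // all allocated objects
    List<Obj> roots = new ArrayList<>(); // root references (stack, statics, threads)

    void mark(Obj o) {
        if (o == null || o.marked) return;
        o.marked = true;                 // mark phase: flag reachable objects
        for (Obj ref : o.refs) mark(ref);
    }

    void gc() {
        for (Obj root : roots) mark(root);   // 1. mark everything reachable from roots
        heap.removeIf(o -> !o.marked);       // 2. sweep: reclaim unmarked objects
        for (Obj o : heap) o.marked = false; // reset marks for the next cycle
    }
}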

What are the Old and Young Generations in Java Garbage Collection?

  • Young Generation: Where new objects are allocated. It includes the Eden space and two survivor spaces. Minor GCs occur frequently in this generation.
  • Old Generation: Where long-lived objects are stored. Major GCs occur less frequently but involve the entire heap.
  • Generational Hypothesis: Most objects die young, so separating objects by age optimizes garbage collection.
  • Efficiency: Generational garbage collection improves efficiency and reduces pause times by focusing on the Young Generation for frequent collections.

What is a ClassLoader in Java?
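A ClassLoader is the JVM component responsible for locating class files and loading their bytecode into memory at runtime. Class loading follows the parent-delegation model: a loader first asks its parent to load a class and only attempts to load it itself if the parent cannot.

The key built-in class loaders are:

  1. Bootstrap ClassLoader: Loads core JDK classes (e.g., java.lang.*); implemented natively and has no parent.
  2. Platform ClassLoader: Loads platform classes; known as the Extension ClassLoader before Java 9.
  3. Application (System) ClassLoader: Loads classes from the application classpath.

Custom class loaders can be created by extending java.lang.ClassLoader, which is how application servers isolate deployments and support reloading. A quick way to observe the hierarchy:

public class ClassLoaderExample {
    public static void main(String[] args) {
        // Our own class is loaded by the application class loader.
        System.out.println(ClassLoaderExample.class.getClassLoader());
        // Core classes are loaded by the bootstrap loader, reported as null.
        System.out.println(String.class.getClassLoader());
    }
}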

What is the @ConditionalOnBean Annotation?

In Spring Boot, @ConditionalOnBean is an annotation used to conditionally enable a bean based on the presence of certain other beans in the Spring ApplicationContext. This is part of Spring Boot's conditional configuration annotations, which allow for more flexible and dynamic bean registration.

Key Characteristics

  • Conditional Bean Registration: @ConditionalOnBean allows a bean to be registered only if certain other beans are present in the ApplicationContext.
  • Flexible Configuration: It helps in creating more modular and flexible configurations by enabling or disabling beans based on the presence of other beans.

Usage

The @ConditionalOnBean annotation can be used on a class or a method. When used on a class, it conditions the entire class's bean registration. When used on a method, it conditions the registration of the method's return value as a bean.
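A minimal sketch; AuditService is a hypothetical class, and @ConditionalOnBean lives in org.springframework.boot.autoconfigure.condition:

import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical service that needs a DataSource.
class AuditService {
    AuditService(DataSource dataSource) { }
}

@Configuration
public class AuditConfig {

    // auditService is registered only if a DataSource bean already exists in the context.
    @Bean
    @ConditionalOnBean(DataSource.class)
    public AuditService auditService(DataSource dataSource) {
        return new AuditService(dataSource);
    }
}

Note that the Spring Boot reference documentation recommends using @ConditionalOnBean mainly in auto-configuration classes, because the outcome depends on the order in which bean definitions are processed.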

What are Optional.of and Optional.ofNullable in Java?

In Java, the Optional class is a container object which may or may not contain a non-null value. It is used to represent optional values and to avoid NullPointerException. The Optional class provides several methods to create and manipulate optional values, including Optional.of and Optional.ofNullable.

Optional.of

The Optional.of method is used to create an Optional instance with a non-null value. If the provided value is null, it throws a NullPointerException.

Optional.ofNullable

The Optional.ofNullable method is used to create an Optional instance that may hold a null value. If the provided value is null, it returns an empty Optional (i.e., an Optional with no value).

Key Differences

Null Handling:

  • Optional.of: Throws NullPointerException if the provided value is null.
  • Optional.ofNullable: Returns an empty Optional if the provided value is null.

Use Case:

  • Optional.of: Use when you are certain that the value is non-null.
  • Optional.ofNullable: Use when the value may be null.
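A quick sketch of the difference:

import java.util.Optional;

public class OptionalCreationExample {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("hello");    // value must be non-null
        Optional<String> maybe = Optional.ofNullable(null); // tolerates null, yields empty

        System.out.println(present.isPresent()); // true
        System.out.println(maybe.isPresent());   // false

        // Optional.of(null) would throw NullPointerException here.
    }
}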

What is Optional.orElse in Java?

In Java, the Optional class provides a method called orElse that returns the contained value if one is present, or a given default value if the Optional is empty. Optional lives in the java.util package, and orElse is useful for handling cases where a value might be absent, allowing you to provide a fallback.
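For example:

import java.util.Optional;

public class OrElseExample {
    public static void main(String[] args) {
        Optional<String> empty = Optional.empty();
        Optional<String> present = Optional.of("value");

        System.out.println(empty.orElse("default"));   // prints "default"
        System.out.println(present.orElse("default")); // prints "value"
    }
}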

What is Optional.orElseGet in Java?

In Java, the Optional.orElseGet method is used to provide a default value when the Optional is empty. Unlike orElse, which takes a direct value, orElseGet takes a Supplier functional interface. This allows the default value to be generated lazily, meaning the Supplier is only executed if the Optional is empty.
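A sketch contrasting the two; expensiveDefault is a made-up method that stands in for any costly computation:

import java.util.Optional;

public class OrElseGetExample {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("value");

        // orElse: the argument is always evaluated, even when the Optional has a value.
        String a = present.orElse(expensiveDefault());

        // orElseGet: the Supplier runs only if the Optional is empty,
        // so expensiveDefault() is NOT called on this line.
        String b = present.orElseGet(OrElseGetExample::expensiveDefault);

        System.out.println(a + " " + b); // value value
    }

    private static String expensiveDefault() {
        System.out.println("computing default..."); // printed once, by the orElse line only
        return "default";
    }
}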

What are Parallel Streams in Java?

Parallel Streams in Java are a feature introduced in Java 8 that allow for parallel processing of data. They leverage the Fork/Join framework to split the data into multiple chunks, process them in parallel, and then combine the results. This can lead to significant performance improvements for large datasets by utilizing multiple CPU cores.

Key Characteristics

  1. Parallel Processing: Parallel streams divide the workload across multiple threads, enabling concurrent execution.
  2. Fork/Join Framework: Utilizes the Fork/Join framework under the hood to manage and execute tasks in parallel.
  3. Automatic Splitting: Automatically splits the data into smaller chunks for parallel processing.
  4. Combining Results: Combines the results of the parallel tasks into a single result.

Creating Parallel Streams

You can create a parallel stream from a collection or an array using the parallelStream method or by calling the parallel method on an existing stream.
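For example, summing a large range in parallel:

import java.util.stream.LongStream;

public class ParallelStreamExample {
    public static void main(String[] args) {
        // Sum 1..10_000_000 using all available cores.
        long sum = LongStream.rangeClosed(1, 10_000_000)
                .parallel() // switch the pipeline to parallel mode
                .sum();
        System.out.println(sum); // 50000005000000

        // The equivalent for collections is list.parallelStream().
    }
}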

Thanks for reading

  • 👏 Please clap for the story and follow me 👉
  • 📰 Read more content on my Medium (70 stories on Java Developer interview)
