Cisco Java Developer Interview Transcript 2024 (Java, Spring Boot, Hibernate)

“Hello folks, I am jotting down the full tech interview round for a Java developer position at Cisco. All these Q&A are actual questions and will immensely help you if you are looking to enter Cisco Systems. Let’s get started.”

Ajay Rathod
17 min readApr 14, 2024

Note: A YouTube video accompanies this article.

Are you preparing for a job interview as a Java developer?

Find my book Guide To Clear Java Developer Interview here Gumroad (PDFFormat) and Amazon (Kindle eBook).

Guide To Clear Spring-Boot Microservice Interview here Gumroad (PDFFormat) and Amazon (Kindle eBook).

Download the sample copy here: Guide To Clear Java Developer Interview[Free Sample Copy]

Guide To Clear Spring-Boot Microservice Interview[Free Sample Copy]

In this interview transcript, the first round was with two senior engineers, who evaluated me on Java, Spring Boot, microservices, databases, Hibernate, Kafka, and so on.

Pro Tip:

“In a one-hour interview, only the important questions are asked, and they tend to repeat all the time. Preparing for them is the key to success.”

Spring Framework and Spring Boot

What is ResponseEntity in Spring Boot?

In Spring Boot, a ResponseEntity is a class used to represent the entire HTTP response sent back to a client. It goes beyond just the data itself and encapsulates three key aspects:

  • Status Code: This indicates the outcome of the request, like success (200 OK), not found (404), or internal server error (500).
  • Headers: These are optional key-value pairs that provide additional information about the response, such as content type, cache control, or authentication details.
  • Body: This is the actual data being sent back to the client. It can be anything from JSON or XML to plain text, depending on your API design.

By using ResponseEntity, you gain fine-grained control over how Spring Boot constructs the response. You can set the appropriate status code, add custom headers, and include the response data in the body. This allows you to build more informative and flexible APIs.

@RestController
public class ProductController {

    @GetMapping("/products/{id}")
    public ResponseEntity<Product> getProduct(@PathVariable Long id) {

        // Simulate product retrieval logic
        Product product = getProductFromDatabase(id);

        // Check if product exists
        if (product == null) {
            return ResponseEntity.notFound().build(); // 404 Not Found
        }

        // Return product with OK status (200)
        return ResponseEntity.ok(product);
    }

    // Simulate product retrieval from database (replace with your actual logic)
    private Product getProductFromDatabase(Long id) {
        // ... (implementation details)
        return new Product(id, "Sample Product", 10.0);
    }
}

How to configure multiple databases in a Spring Boot application?

This is a very interesting question, and it gets repeated all the time in interviews.

Spring Boot offers a convenient way to configure multiple databases in your application. Here’s a breakdown of the steps involved:

1. Define Data Source Properties:

  • Spring Boot uses properties to configure data sources. You can define them in your application.properties or application.yml file.
  • Each data source needs its own set of properties, prefixed with a unique identifier. Common properties include:
  • url: Database connection URL.
  • username: Database username.
  • password: Database password.
  • driverClassName: JDBC driver class name for the database.

Here’s an example configuration for two databases named users and orders:

spring:
  datasource:
    users:
      url: jdbc:mysql://localhost:3306/users
      username: user
      password: password
      driverClassName: com.mysql.cj.jdbc.Driver
    orders:
      url: jdbc:postgresql://localhost:5432/orders
      username: orders_user
      password: orders_password
      driverClassName: org.postgresql.Driver

2. Create DataSource Beans:

  • Spring Boot provides annotations and utilities to create DataSource beans.
  • You can use @ConfigurationProperties to map the data source properties defined earlier to a bean.
  • Here’s an example configuration class with DataSourceBuilder to create beans for each data source:
@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.users")
    public DataSource usersDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.orders")
    public DataSource ordersDataSource() {
        return DataSourceBuilder.create().build();
    }
}

3. Configure Entity Manager and Transaction Manager (Optional):

  • If you’re using Spring Data JPA, you’ll need to configure separate Entity Managers and Transaction Managers for each data source.
  • These can be created similarly to DataSource beans, specifying the entities associated with each data source.

4. Injecting the Correct DataSource:

  • By default, Spring Boot auto-configures a single DataSource. To use specific data sources:
  • You can inject @Qualifier("usersDataSource") or @Qualifier("ordersDataSource") for specific repositories or services.
  • JPA repositories can be tied to a specific data source with @EnableJpaRepositories, using its entityManagerFactoryRef and transactionManagerRef attributes to specify which EntityManager to use.

Remember to adapt the configuration details (database type, connection details) to your specific databases.

How to declare Global Exceptions in a Spring Boot application? [imp question]

Spring Boot offers a clean, centralized approach to declaring Global Exceptions:

1. Using @ControllerAdvice:

This is the recommended approach for centralized exception handling. Here’s how it works:

  • Create a class annotated with @ControllerAdvice.
  • Define methods annotated with @ExceptionHandler to handle specific exceptions.
  • These methods can:
  • Return a custom error response object containing details about the exception.
  • Set a specific HTTP status code using ResponseEntity.
  • Log the exception for further analysis.
@ControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<ErrorResponse> handleResourceNotFound(ResourceNotFoundException ex) {
        ErrorResponse errorResponse = new ErrorResponse("Resource not found");
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(errorResponse);
    }

    // Define methods for other exceptions you want to handle globally
}

What is Spring Bean LifeCycle?

What is an IoC Container?

An IoC Container is a software framework component that manages the objects (beans) in your application. It takes over the responsibility of creating, configuring, and assembling these objects and their dependencies.

How Does it Work?

  • Object Creation: Traditionally, you’d manually create objects in your code. With an IoC container, you define the objects (beans) you need in your application using configuration files (XML or annotations) or Java classes. The container then takes care of instantiating these objects.
  • Dependency Injection: Objects often rely on other objects to function properly (dependencies). Instead of manually creating and passing these dependencies around, you declare them in your object definitions. The IoC container injects (provides) the required dependencies to the objects it creates. This creates a loose coupling between objects, making your code more modular and easier to test.
  • Object Lifecycle Management: The IoC container also manages the lifecycle of objects, including initialization and destruction. This frees you from writing boilerplate code for these tasks.

What is Dependency Injection?

In software development, dependency injection (DI) is a technique for providing an object with the objects (dependencies) it needs to function. Here’s a breakdown of the key concepts:

What are Dependencies?

  • Dependencies are other objects that a class or function relies on to perform its work effectively.
  • Examples:
  • A car depends on an engine, wheels, and other parts to function.
  • A database access class depends on a database connection object to interact with the database.
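To make the idea concrete, here is a minimal plain-Java sketch of constructor injection, with no framework involved. Engine, PetrolEngine, and Car are made-up names that mirror the car analogy above; a container like Spring automates the wiring shown in main:

```java
interface Engine {
    String start();
}

class PetrolEngine implements Engine {
    public String start() { return "petrol engine started"; }
}

class Car {
    private final Engine engine; // dependency is supplied, not created here

    Car(Engine engine) {         // constructor injection
        this.engine = engine;
    }

    String drive() { return engine.start(); }
}

public class DiDemo {
    public static void main(String[] args) {
        // Playing the "container" role: the dependency is wired from outside,
        // so Car never needs to know which Engine implementation it gets.
        Car car = new Car(new PetrolEngine());
        System.out.println(car.drive()); // prints "petrol engine started"
    }
}
```

Because Car depends only on the Engine interface, a test can inject a stub engine without touching Car's code, which is exactly the loose coupling DI is after.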

What is the ApplicationContext & its use?

In Spring Boot applications, the ApplicationContext is a central interface that plays a critical role in managing the objects (beans) used throughout your application. It’s essentially a container that provides the following functionalities:

1. Bean Management:

  • The core responsibility of the ApplicationContext is to manage the objects (beans) that make up your application.
  • These beans are typically defined using annotations or XML configuration files.
  • The ApplicationContext takes care of creating, configuring, and assembling these beans according to the specified configuration.

2. Dependency Injection:

  • Beans often rely on other beans to function properly. These are called dependencies.
  • The ApplicationContext facilitates dependency injection by automatically providing the required dependencies to the beans it creates. This eliminates the need for manual dependency creation and management, leading to loosely coupled and more maintainable code.

3. Resource Access:

  • The ApplicationContext provides access to various resources your application might need, such as property files, configuration files, and message bundles.
  • This simplifies resource retrieval and ensures consistent access throughout your code.

How to Enable multiple Eureka Servers?

This article deals with this question

What is the difference between Spring Filters and Spring Interceptors? (This is a good question to check your Spring MVC framework concepts)

HandlerInterceptor is basically similar to a Servlet Filter, but in contrast to the latter it just allows custom pre-processing with the option of prohibiting the execution of the handler itself, and custom post-processing. Filters are more powerful, for example they allow for exchanging the request and response objects that are handed down the chain. Note that a filter gets configured in web.xml, a HandlerInterceptor in the application context.

As a basic guideline, fine-grained handler-related pre-processing tasks are candidates for HandlerInterceptor implementations, especially factored-out common handler code and authorization checks. On the other hand, a Filter is well-suited for request content and view content handling, like multipart forms and GZIP compression. This typically shows when one needs to map the filter to certain content types (e.g. images), or to all requests.

So where is the difference between Interceptor#postHandle() and Filter#doFilter()?

postHandle will be called after handler method invocation but before the view being rendered. So, you can add more model objects to the view but you can not change the HttpServletResponse since it's already committed.

doFilter is much more versatile than the postHandle. You can change the request or response and pass it to the chain or even block the request processing.

Also, in preHandle and postHandle methods, you have access to the HandlerMethod that processed the request. So, you can add pre/post-processing logic based on the handler itself. For example, you can add a logic for handler methods that have some annotations.


Why use the @Transactional annotation, and what's the benefit? (In Spring interviews, @Transactional is always asked)

@Transactional is a Spring annotation that can be applied to methods or classes to indicate that the annotated code should be executed within a transaction. When Spring encounters the @Transactional annotation, it automatically creates a transaction around the annotated code and manages the transaction lifecycle.

By default, @Transactional creates a transaction with the default isolation level (usually READ_COMMITTED) and the default propagation behavior (REQUIRED). However, you can customize these settings by passing parameters to the annotation.

Here’s an example of using @Transactional in a Spring service class:

@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    @Transactional
    public void createUser(String name, String email) {
        User user = new User(name, email);
        userRepository.save(user);
    }
}

In this example, the createUser() method is annotated with @Transactional, which means that the save() method of the UserRepository will be executed within a transaction.

Advantages of @Transactional:

The @Transactional annotation provides several benefits:

  1. Simplifies transaction management: By using @Transactional, you can avoid writing boilerplate code to create and manage transactions manually. Spring takes care of transaction management for you, so you can focus on writing business logic.
  2. Promotes consistency and integrity: Transactions ensure that multiple database operations are executed atomically, which helps to maintain data consistency and integrity.
  3. Improves performance: Transactions can improve database performance by reducing the number of round trips between the application and the database.
  4. Supports declarative programming: With @Transactional, you can use declarative programming to specify transaction management rules. This makes your code more concise and easier to read.

What’s the main difference between Hashtable and ConcurrentHashMap?

The key difference between Hashtable and ConcurrentHashMap in Java:


  • Hashtable: Uses a single lock for the entire table. This means only one thread can access the table at a time, even for reads, creating a bottleneck in high-concurrency scenarios.
  • ConcurrentHashMap: Uses fine-grained locking at the bucket level (segment-based lock striping in older versions; per-bin CAS and synchronized blocks since Java 8). This allows fully concurrent reads and a high degree of concurrent writes, significantly improving performance in multi-threaded environments.
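A small sketch of the practical difference (class and key names are illustrative):

```java
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapDemo {
    public static void main(String[] args) {
        // Hashtable: every method takes the single table-wide lock,
        // so even two readers serialize behind each other.
        Map<String, Integer> table = new Hashtable<>();
        table.put("hits", 1);
        System.out.println(table.get("hits")); // 1

        // ConcurrentHashMap: reads are lock-free and writes lock only the
        // affected bin, so threads working on different keys don't contend.
        Map<String, Integer> chm = new ConcurrentHashMap<>();
        chm.put("hits", 0);
        chm.merge("hits", 1, Integer::sum); // atomic read-modify-write
        System.out.println(chm.get("hits")); // 1

        // Note: neither map accepts null keys or values; HashMap would.
    }
}
```

Atomic compound operations like merge(), compute(), and putIfAbsent() are another reason ConcurrentHashMap is preferred in concurrent code.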

What's the main difference between Hashtable and HashMap?

If there is Memory Leak in your application how will you find it?

Symptoms of a Memory Leak

  1. Severe performance degradation when the application has been running continuously for a long time.
  2. OutOfMemoryError: Java heap space errors in the application.
  3. Spontaneous and strange application crashes.
  4. The application occasionally running out of connection objects.

Finding the Leak

  • Capture a heap dump with jmap -dump:live,format=b,file=heap.hprof <pid>, or enable -XX:+HeapDumpOnOutOfMemoryError so a dump is taken automatically at the moment of failure.
  • Analyze the dump with Eclipse MAT or VisualVM: the dominator tree shows which objects are holding the most retained memory.
  • A profiler such as JProfiler or YourKit can track allocations over time to spot collections that only ever grow.
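For illustration, here is a deliberately leaky class (the names are made up) showing the most common pattern behind these symptoms: a long-lived static collection that only ever grows:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Classic leak: a static (long-lived) collection that is never cleared.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        CACHE.add(new byte[1024]); // added on every "request", never evicted
    }

    public static int cacheSize() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) handleRequest();
        // Every byte[] is still strongly reachable through CACHE, so the GC
        // can never reclaim it; a heap dump would show CACHE as the dominator
        // of all this memory.
        System.out.println(cacheSize()); // 1000
    }
}
```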

What is the use of StringTokenizer?

The string tokenizer class allows an application to break a string into tokens. The tokenization method is much simpler than the one used by the StreamTokenizer class. The StringTokenizer methods do not distinguish among identifiers, numbers, and quoted strings, nor do they recognize and skip comments.

The set of delimiters (the characters that separate tokens) may be specified either at creation time or on a per-token basis.

An instance of StringTokenizer behaves in one of two ways, depending on whether it was created with the returnDelims flag having the value true or false:

  • If the flag is false, delimiter characters serve to separate tokens. A token is a maximal sequence of consecutive characters that are not delimiters.
  • If the flag is true, delimiter characters are themselves considered to be tokens. A token is thus either one delimiter character, or a maximal sequence of consecutive characters that are not delimiters.

A StringTokenizer object internally maintains a current position within the string to be tokenized. Some operations advance this current position past the characters processed.

A token is returned by taking a substring of the string that was used to create the StringTokenizer object.

The following is one example of the use of the tokenizer. The code:

StringTokenizer st = new StringTokenizer("this is a test");
while (st.hasMoreTokens()) {
    System.out.println(st.nextToken());
}

This prints each token ("this", "is", "a", "test") on its own line.

What is Spring Security Context?

SecurityContext is obtained from the SecurityContextHolder and contains the Authentication of the currently authenticated user. Authentication can serve as the input to an AuthenticationManager, carrying the credentials a user has provided, or it can represent the already-authenticated current user retrieved from the SecurityContext.

What is the Difference between these two — Object Level Locking and Class Level Locking?

In concurrent programming, synchronization is essential to maintain data consistency when multiple threads access shared resources. Locking mechanisms achieve synchronization by restricting access to these resources, ensuring only one thread operates on them at a time. There are two primary locking approaches in object-oriented programming languages like Java: Object Level Locking and Class Level Locking.

Object Level Locking

  • Applies to individual objects: Every object in Java has a unique lock associated with it.
  • Achieved using the synchronized keyword with non-static methods or code blocks.
  • Ensures only one thread can execute a synchronized method on a specific object at a time.
  • Other threads attempting to access the same synchronized method on the same object will be blocked until the lock is released.
  • Suitable for synchronizing access to instance variables and methods of an object.
  • Maintains granularity, allowing concurrent access to different objects of the same class.

Class Level Locking

  • Applies to the entire class: Achieved using the synchronized keyword with static methods.
  • Only one thread can execute a synchronized static method of a class, regardless of the object instance.
  • All other threads attempting to access synchronized static methods will be blocked.
  • Useful for synchronizing access to static variables and methods of a class.
  • Offers a broader level of control but can lead to more significant performance overhead compared to object level locking.
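The distinction can be demonstrated directly (LockDemo is a made-up class):

```java
public class LockDemo {
    private static int staticCount = 0;
    private int instanceCount = 0;

    // Object-level lock: synchronizes on `this`, so two threads may run this
    // method at the same time as long as they call it on different instances.
    public synchronized void incrementInstance() {
        instanceCount++;
    }

    // Class-level lock: synchronizes on LockDemo.class, so only one thread in
    // the whole JVM can be inside this method at a time, whatever the instance.
    public static synchronized void incrementStatic() {
        staticCount++;
    }

    public int getInstanceCount() { return instanceCount; }
    public static int getStaticCount() { return staticCount; }

    public static void main(String[] args) throws InterruptedException {
        LockDemo a = new LockDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) a.incrementInstance(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) LockDemo.incrementStatic(); });
        t1.start(); t2.start();
        t1.join(); t2.join();   // join() makes the counts safely visible here
        System.out.println(a.getInstanceCount() + " " + LockDemo.getStaticCount()); // 10000 10000
    }
}
```

Because the two methods take different locks (`this` vs. LockDemo.class), the two threads above never block each other.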

Between these two who will consume more memory(int or Integer)?

In Java, int will consume less memory than Integer. Here's why:

  • int is a primitive data type: It represents the basic integer value itself and stores the number directly in memory. In Java it is always 4 bytes (32 bits).
  • Integer is a class (or object): Integer wraps an int value in an object, adding an object header and padding (roughly 16 bytes per instance on a typical 64-bit JVM) plus the reference needed to point at it. The extra functionality it provides, such as conversion methods, nullability, and use in collections, comes at this memory cost.

What is Weak HashMap?

WeakHashMap is a special implementation of the Map interface in Java. It differs from a regular HashMap in how it handles keys:

  • Key Storage: In a WeakHashMap, keys are stored using WeakReferences. This means the keys themselves are not considered strong references that prevent garbage collection (GC).
  • Automatic Removal: When the only reference to a key in the WeakHashMap is the weak reference within the map itself, and there are no other strong references to the key elsewhere in the program, GC can reclaim the key’s memory. As a result, the corresponding key-value pair is automatically removed from the WeakHashMap.

Use Cases for WeakHashMap:

  • Cache Implementation: WeakHashMap is useful for creating caches where entries can be automatically removed if they are no longer being actively used. This helps with memory management as unused entries are discarded by GC.
  • Weak References: When you need to associate data with an object but don’t want to prevent its garbage collection, a WeakHashMap can be a good choice.
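A short sketch of the behavior. Note that garbage-collection timing is up to the JVM, so real code should never depend on exactly when an entry disappears:

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<>();

        Object key = new Object();        // strong reference held by `key`
        cache.put(key, "some metadata");
        System.out.println(cache.size()); // 1: the key is strongly reachable

        key = null;                       // drop the only strong reference
        System.gc();                      // *request* a collection (no guarantee)
        Thread.sleep(100);                // give the GC a moment to run

        // Once the GC has actually run, the entry becomes eligible to vanish;
        // the exact moment is JVM-dependent.
        System.out.println(cache.size());
    }
}
```

With a regular HashMap, the map itself would keep the key strongly reachable forever, which is exactly why WeakHashMap suits caches of auxiliary data.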

What is the difference between these interfaces — Predicate, Supplier, Consumer and Function?

Very Good link on functional interfaces down below,
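In the meantime, here is a compact sketch of all four interfaces from java.util.function:

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalDemo {
    public static void main(String[] args) {
        // Predicate<T>: takes a T, returns boolean — "does this match?"
        Predicate<String> isEmpty = String::isEmpty;
        System.out.println(isEmpty.test(""));     // true

        // Supplier<T>: takes nothing, returns a T — "give me a value"
        Supplier<String> greeting = () -> "hello";
        System.out.println(greeting.get());       // hello

        // Consumer<T>: takes a T, returns nothing — "do something with it"
        Consumer<String> printer = s -> System.out.println("got: " + s);
        printer.accept("data");                   // got: data

        // Function<T, R>: takes a T, returns an R — "transform it"
        Function<String, Integer> length = String::length;
        System.out.println(length.apply("four")); // 4
    }
}
```

These four show up constantly in the Streams API: filter takes a Predicate, map takes a Function, forEach takes a Consumer, and collect/generate use Suppliers.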

Repeated Questions in every Java Dev Interview

I am writing down repeated questions on databases and Hibernate, without answers, as they are very easy to look up. If you don't know one, please comment and I will provide the answer. You can also post your own answers.

Consider this as a homework lol :)

Difference between Callable and Runnable?

Runnable and Callable are both interfaces in Java that are designed for classes whose instances are intended to be executed by a thread. However, there are some key differences between them:


  • The Runnable interface has a single method called run() that takes no arguments and returns no result.
public interface Runnable {
    void run();
}

  • If you need to return a result from your Runnable, or throw a checked exception, you have to use a workaround, such as modifying a variable that is visible outside the Runnable.


  • The Callable interface has a single method called call(), which can return a value and can throw an exception.
public interface Callable<V> {
    V call() throws Exception;
}

  • Callable is designed to be used with an ExecutorService, which returns a Future representing the pending result of the computation.

If you need to perform a computation in a thread and you don’t need to return a result, use Runnable. If you need to return a result, or if the computation can throw a checked exception, use Callable.
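A minimal sketch of both, using an ExecutorService:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Runnable: no result, cannot throw a checked exception.
        Runnable task = () -> System.out.println("running");
        pool.execute(task);

        // Callable: returns a value (via a Future) and may throw.
        Callable<Integer> sum = () -> 1 + 2;
        Future<Integer> future = pool.submit(sum);
        System.out.println(future.get()); // 3 (blocks until the task is done)

        pool.shutdown();
    }
}
```

Future.get() is where the Callable's result (or any exception it threw, wrapped in an ExecutionException) surfaces back on the calling thread.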

Difference between Primary Key and Unique Key?

Primary Key:

Uniqueness: A primary key is a column (or a set of columns) in a table that uniquely identifies each row. It must contain unique and non-null values for each row.

Null Values:Primary key columns cannot contain null values. Every row in the table must have a value for the primary key.

Constraints:Each table can have only one primary key. It uniquely identifies rows in the table and is used as a reference point for relationships with other tables.

Automatically Indexed:By default, primary key columns are automatically indexed, which can improve the performance of queries.

Unique Key:

Uniqueness: A unique key constraint ensures that the values in a column (or a set of columns) are unique. Unlike a primary key, a unique key can allow null values.

Null Values: How many nulls a unique column may hold is database-specific: SQL Server allows only one null per unique column, while MySQL and PostgreSQL treat each null as distinct and allow multiple.

Constraints:Each table can have multiple unique key constraints, each ensuring uniqueness within its specified column(s).

Indexing: In practice, most databases automatically create a unique index to enforce the constraint, just as they do for a primary key, so lookups on unique key columns are typically fast as well.

What is the use of Triggers in Database?

Triggers in databases are like mini-programs that run automatically in response to events (inserts, updates, deletes) on a table. They are used for:

  • Data validation and integrity: ensure data meets specific rules.
  • Automating tasks: trigger actions like notifications or calculations based on data changes.
  • Data auditing: track who, what, and when data was modified.

Difference between PreparedStatement and Statement?

Difference between SQL and NoSQL databases?

Here’s the core difference between SQL and NoSQL databases:

SQL databases:

  • Relational: Data is stored in tables with relationships between them.
  • Structured query language (SQL) for access and manipulation.
  • Predefined schema (data structure) for strong data integrity.
  • Vertically scalable (adding more powerful hardware).
  • Good for complex queries and related data.

NoSQL databases:

  • Non-relational: Data can be stored in various formats (documents, key-value pairs, graphs).
  • Less rigid schema, allowing for flexible data structures.
  • Horizontally scalable (adding more servers).
  • Faster for handling large, unstructured data sets.

What is DataBase Indexing?

Database indexing is like an organized filing system for your database tables. It’s a special data structure that significantly speeds up data retrieval by allowing quick access to specific information.

Imagine a large library without an index. Finding a specific book would involve searching every shelf, one by one. An index in a database works like the library’s card catalog — it points you directly to the location of the data you need without scanning the entire table.
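The library analogy can be sketched in plain Java, with a HashMap playing the role of the index (Book and the numbers are made up; a real database index is typically a B-tree, giving O(log n) lookups):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IndexDemo {
    record Book(int id, String title) {}

    public static void main(String[] args) {
        List<Book> shelf = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) shelf.add(new Book(i, "Book " + i));

        // Without an index: scan every row until the id matches — O(n),
        // like checking every shelf in the library.
        Book found = null;
        for (Book b : shelf) {
            if (b.id() == 99_999) { found = b; break; }
        }
        System.out.println(found.title()); // Book 99999

        // With an "index": a map from key to row gives near-constant lookups,
        // like going straight to the card catalog.
        Map<Integer, Book> index = new HashMap<>();
        for (Book b : shelf) index.put(b.id(), b);
        System.out.println(index.get(99_999).title()); // Book 99999
    }
}
```

The trade-off is the same as in a database: the index costs extra memory and must be updated on every insert, in exchange for much faster reads.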

What is Sharding in database?

Sharding in a database is a technique for splitting a large database into smaller, more manageable pieces called shards. These shards are then distributed across multiple servers or nodes. Here’s a breakdown of how it works:

  • Imagine a huge bookshelf: This bookshelf represents your entire database, overflowing with books (data).
  • Sharding is like dividing the bookshelf: You split the data into smaller sections based on a chosen criteria (like genre, author, publication date). Each section becomes a shard.
  • Distributing the shards: Each shard is then placed on a separate server, like placing the categorized books on different shelves in different rooms.

Difference between Hibernate First-Level and Second-Level Caching?

Hibernate offers two levels of caching to improve application performance by reducing database calls: First-Level Cache and Second-Level Cache. Here’s a breakdown of their key differences:


  • First-Level Cache (L1 Cache): Session-specific. Exists for the duration of a single Hibernate session. Data loaded by one query within a session is available to subsequent queries within the same session without hitting the database again.
  • Second-Level Cache (L2 Cache): Optional, application-wide cache. Shared across all Hibernate sessions associated with the same session factory. Data loaded by one session can be reused by other sessions, significantly reducing database interactions.

Difference between get and load in Hibernate?

Data Fetching Strategy:

  • get: Performs an immediate database query to fetch the object identified by the ID.
  • If the object exists in the database, it’s returned as a fully populated object.
  • If the object doesn’t exist, get returns null.
  • load: Returns a proxy object representing the identified entity.
  • The actual data from the database is retrieved only when you access a property or method of the object. This technique is called Lazy Loading.
  • If the object doesn’t exist in the database, load throws an ObjectNotFoundException when you try to access its properties.

Database Interaction:

  • get: Checks the session (first-level) cache first; if the object is not cached, it immediately triggers a database query.
  • load: Might not trigger a database query at all. It returns a proxy, and the query happens only when you access the object’s data.

Difference between save and persist in Hibernate?


save:

  • Tries to insert a new record into the database.
  • If the object already has an identifier (primary key) assigned, it assumes an update operation and performs an update query.
  • Returns the generated identifier (if applicable).

persist:

  • Marks the object as managed by Hibernate for persistence.
  • The actual insert operation happens when the transaction is committed, not necessarily immediately.
  • Does not return anything (void).

Transaction Context:

  • save: Can be called within or outside of a transaction. If outside a transaction, the insert might happen right away depending on the Hibernate configuration.
  • persist: Requires being called within a transaction. This ensures data consistency and avoids potential issues.

Thanks For Reading

  • 👏 Please clap for the story and follow me 👉
  • 📰 Read more content on my Medium (60 stories on Java Developer interview)

Find my books here: