Commit 513ae83
2 parents 13f6f21 + d45a3d7 commit 513ae83

2 files changed: +284 −1 lines changed

CheatSheet.md (+181)
# Cheat Sheet

## Concurrent Data Structures

### 1. ConcurrentHashMap
- **Usage**: A thread-safe variant of `HashMap`.
- **Example**:
```
ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
map.put("key", 1);
int value = map.get("key");
```
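The `put`/`get` calls above cover the basics; the real value of `ConcurrentHashMap` is its atomic compound operations, which avoid lost updates without external locking. A minimal sketch (the class name and keys are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AtomicMapOps {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

        // merge() atomically combines the existing value with a new one,
        // so concurrent increments are never lost.
        counts.merge("clicks", 1, Integer::sum);
        counts.merge("clicks", 1, Integer::sum);

        // computeIfAbsent() atomically inserts a value only if the key is missing.
        counts.computeIfAbsent("views", k -> 0);

        System.out.println(counts.get("clicks")); // prints 2
    }
}
```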
### 2. CopyOnWriteArrayList
- **Usage**: A thread-safe variant of `ArrayList` for read-mostly scenarios.
- **Example**:
```
List<String> list = new CopyOnWriteArrayList<>();
list.add("element");
for (String s : list) {
    System.out.println(s);
}
```
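A note on the loop above: a `CopyOnWriteArrayList` iterator traverses a snapshot taken when iteration begins, so the list can be modified mid-loop without a `ConcurrentModificationException`. A small sketch (the class name and elements are illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotIteration {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        // The iterator works on a snapshot taken when iteration starts,
        // so mutating the list mid-loop is safe.
        for (String s : list) {
            list.add(s + "!"); // would throw ConcurrentModificationException with a plain ArrayList
        }

        System.out.println(list); // [a, b, a!, b!]
    }
}
```

The trade-off is that every write copies the whole backing array, which is why this structure suits read-mostly workloads.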

### 3. BlockingQueue
- **Usage**: Thread-safe queues that block on operations when necessary.
- **Types**: `ArrayBlockingQueue`, `LinkedBlockingQueue`, `PriorityBlockingQueue`.
- **Example**:
```
BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
queue.put("element");
String element = queue.take();
```
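The `put`/`take` calls above are typically split across threads: `put` blocks when the queue is full and `take` blocks when it is empty, which is what makes producer-consumer pipelines simple. A minimal sketch (class and variable names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        // Producer: put() blocks whenever the queue is at capacity (2).
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // Consumer: take() blocks until an element is available.
        int sum = 0;
        for (int i = 0; i < 5; i++) {
            sum += queue.take();
        }
        producer.join();
        System.out.println(sum); // 15
    }
}
```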

### 4. ConcurrentSkipListMap
- **Usage**: A thread-safe variant of `TreeMap` for scalable sorted maps.
- **Example**:
```
ConcurrentNavigableMap<String, Integer> map = new ConcurrentSkipListMap<>();
map.put("key", 1);
int value = map.get("key");
```

### 5. DelayQueue
- **Usage**: A thread-safe queue that holds elements until a delay has expired.
- **Example**:
```
DelayQueue<DelayedElement> queue = new DelayQueue<>();
queue.put(new DelayedElement());
DelayedElement element = queue.take();
```
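The snippet above assumes a `DelayedElement` class that the sheet never defines; `DelayQueue` only accepts elements implementing the `Delayed` interface. One minimal hypothetical implementation (the class design, default delay, and constructor argument are assumptions, not part of the sheet):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical element type: DelayQueue requires entries to implement Delayed.
public class DelayedElement implements Delayed {
    private final long expiryMillis;

    public DelayedElement() {
        this(1000); // default: held for ~1 second
    }

    public DelayedElement(long delayMillis) {
        this.expiryMillis = System.currentTimeMillis() + delayMillis;
    }

    // Remaining delay; the queue releases the element once this reaches zero.
    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(expiryMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    // Order elements by remaining delay so the soonest-to-expire is taken first.
    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedElement> queue = new DelayQueue<>();
        queue.put(new DelayedElement(100)); // held for ~100 ms
        DelayedElement element = queue.take(); // blocks until the delay expires
        System.out.println("released");
    }
}
```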

## Using ExecutorService

### Creating an ExecutorService

Fixed thread pool:
```
ExecutorService executor = Executors.newFixedThreadPool(5);
```

Single-thread executor:
```
ExecutorService executor = Executors.newSingleThreadExecutor();
```

Cached thread pool:
```
ExecutorService executor = Executors.newCachedThreadPool();
```

Scheduled thread pool:
```
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
```
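A `ScheduledExecutorService` is created above but never used; the two common scheduling calls are `schedule` (one-shot) and `scheduleAtFixedRate` (repeating). A brief sketch (the class name, messages, and delays are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulingDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        // Run once after a 100 ms delay.
        scheduler.schedule(() -> System.out.println("one-shot"), 100, TimeUnit.MILLISECONDS);

        // Run repeatedly: first immediately, then every 200 ms.
        scheduler.scheduleAtFixedRate(() -> System.out.println("tick"), 0, 200, TimeUnit.MILLISECONDS);

        Thread.sleep(500);    // let a few runs happen
        scheduler.shutdown(); // stop accepting and cancel periodic tasks
    }
}
```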

### Submitting Tasks

Runnable task:
```
executor.submit(() -> {
    System.out.println("Task executed");
});
```

Callable task:
```
Future<String> future = executor.submit(() -> {
    return "Task result";
});
```
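A `Callable`'s result is retrieved through the returned `Future`; `get()` blocks until the task finishes, and the timed overload gives up with a `TimeoutException`. A minimal sketch (the class name and the `6 * 7` task are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Submitting a Callable returns a Future for the eventual result.
        Future<Integer> future = executor.submit(() -> 6 * 7);

        // get() blocks until the result is ready; the timed overload
        // throws TimeoutException if the task takes too long.
        int result = future.get(1, TimeUnit.SECONDS);
        System.out.println(result); // 42

        executor.shutdown();
    }
}
```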

### Shutting Down ExecutorService

Shutdown gracefully (note that `awaitTermination` throws the checked `InterruptedException`, so it needs a try/catch):
```
executor.shutdown();
try {
    if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
        executor.shutdownNow();
    }
} catch (InterruptedException e) {
    executor.shutdownNow();
}
```

Force shutdown:
```
executor.shutdownNow();
```

## Good Practices

1. Use Appropriate Data Structures
Choose thread-safe data structures from the `java.util.concurrent` package.
Example: Use `ConcurrentHashMap` instead of `HashMap`.
2. Minimize Locking
Prefer higher-level concurrency utilities such as `Semaphore`, `CountDownLatch`, and `CyclicBarrier` over explicit locks.
Example:
```
Semaphore semaphore = new Semaphore(1);
semaphore.acquire();
try {
    // critical section
} finally {
    semaphore.release();
}
```
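`CountDownLatch`, mentioned above, covers the complementary case: waiting until a fixed number of events have happened. A minimal sketch (the class name and worker count are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                // ... do some work ...
                done.countDown(); // signal this worker's completion
            }).start();
        }

        done.await(); // blocks until all three workers have counted down
        System.out.println("all workers finished");
    }
}
```

Unlike a `Semaphore`, a latch is one-shot: once the count reaches zero it cannot be reset (use `CyclicBarrier` for reusable rendezvous points).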
3. Use Executors for Thread Management
Avoid manual thread creation; use `ExecutorService` for better management.
Example:
```
ExecutorService executor = Executors.newFixedThreadPool(5);
executor.submit(() -> {
    // task
});
executor.shutdown();
```
4. Handle Exceptions in Tasks
Ensure that tasks submitted to an executor service handle exceptions properly.
Example:
```
executor.submit(() -> {
    try {
        // task
    } catch (Exception e) {
        e.printStackTrace();
    }
});
```
5. Properly Shut Down ExecutorService
Always shut down an `ExecutorService` to release its resources.
Example:
```
executor.shutdown();
try {
    if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
        executor.shutdownNow();
    }
} catch (InterruptedException e) {
    executor.shutdownNow();
}
```
6. Avoid Blocking Operations in Tasks
Avoid long-running or blocking operations inside tasks to keep the thread pool responsive.
Example:
```
executor.submit(() -> {
    // Avoid blocking calls like Thread.sleep() or I/O operations here
});
```
7. Use Thread-Safe Collections
Use collections from `java.util.concurrent` for thread safety.
Example:
```
ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
```

README.md (+103 −1)
# Concurrency-Multithreading-and-Parallel-Computing-in-Java
# Understanding Multithreading and Parallelism

## Types of Memory

### Registers
- **Location**: Inside the CPU.
- **Speed**: Fastest type of memory.
- **Purpose**: Holds data that the CPU is currently processing.
- **Example**: Program counter, instruction register, accumulator.

### Cache Memory
- **Levels**: L1, L2, and sometimes L3.
- **Location**: Closer to the CPU than main memory (RAM).
- **Speed**: Faster than RAM but slower than registers.
- **Purpose**: Stores frequently accessed data to speed up processing.
- **Example**: Instructions and data that are likely to be reused by the CPU.

### Main Memory (RAM)
- **Location**: External to the CPU.
- **Speed**: Slower than cache and registers.
- **Purpose**: Stores data and instructions that the CPU needs while running programs.
- **Example**: The currently running applications and their data.

### Secondary Storage
- **Types**: Hard drives (HDD), solid-state drives (SSD).
- **Speed**: Much slower than RAM.
- **Purpose**: Persistent storage for data and programs.
- **Example**: Operating system, applications, and files.
## CPU Architecture

### Cores
- Modern CPUs have multiple cores, each capable of executing its own thread.
- Each core has its own set of registers and often its own L1 cache.

### Hyper-Threading (Simultaneous Multithreading)
- Allows each core to execute multiple threads.
- Improves utilization of CPU resources by switching between threads.

### Instruction Pipeline
- **Stages**: Fetch, decode, execute, memory access, write-back.
- Allows the CPU to work on multiple instructions simultaneously, increasing throughput.

### Control Unit (CU)
- Directs the operation of the processor.
- Fetches instructions from memory and decodes them.

### Arithmetic Logic Unit (ALU)
- Performs arithmetic and logical operations.

### Cache Hierarchy
- **L1 Cache**: Smallest and fastest, specific to each core.
- **L2 Cache**: Larger than L1, may be shared between cores.
- **L3 Cache**: Even larger, often shared among all cores in a CPU.
## Multithreading and Parallelism

### Concurrency
- Multiple threads make progress by sharing CPU time.
- Useful for I/O-bound tasks where threads can be paused while waiting for I/O operations to complete.

### Parallelism
- Multiple threads run simultaneously on different cores.
- Useful for CPU-bound tasks that can be divided into independent subtasks.
## Memory and Threading

### Shared Memory
- Threads within the same process share the same memory space.
- **Benefits**: Efficient communication between threads.
- **Challenges**: Requires synchronization to avoid race conditions.
### Synchronization Mechanisms
- **Mutexes**: Ensure that only one thread can access a resource at a time.
- **Semaphores**: Control access to a resource that can handle a fixed number of concurrent accesses.
- **Monitors**: A combination of mutex and condition variables for thread synchronization.
- **Atomic Variables**: Ensure thread-safe operations on shared data without needing explicit locks.
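In Java, atomic variables live in `java.util.concurrent.atomic`; a minimal sketch of why `incrementAndGet()` loses no updates under contention (the class name and iteration counts are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();

        // Two threads increment 1000 times each; incrementAndGet() performs
        // the read-modify-write as one atomic operation, so no update is lost.
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) counter.incrementAndGet();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(counter.get()); // 2000
    }
}
```

With a plain `int` and `counter++`, the same program could print any value up to 2000, because the increment is three separate steps (read, add, write) that can interleave.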
### Thread Local Storage
- Each thread has its own local storage, preventing interference between threads.
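In Java this is `ThreadLocal`; a minimal sketch showing that each thread sees an independent copy (the class name and values are illustrative):

```java
public class ThreadLocalDemo {
    // Each thread gets its own independent copy of this value.
    private static final ThreadLocal<Integer> local = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) throws InterruptedException {
        local.set(1); // the main thread's copy

        Thread t = new Thread(() -> {
            // This thread sees the initial value, not main's 1.
            System.out.println(local.get()); // 0
            local.set(99); // does not affect main's copy
        });
        t.start();
        t.join();

        System.out.println(local.get()); // still 1
    }
}
```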
## Important Considerations

### Race Conditions
- Occur when two threads access shared data simultaneously and at least one access is a write.
- **Solution**: Use synchronization mechanisms to control access to shared data.
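A classic race of this kind in Java is an unsynchronized `count++` on a shared field; a minimal sketch of the `synchronized` fix (the class name and iteration counts are illustrative):

```java
public class SafeCounter {
    private int count = 0;

    // synchronized makes the read-modify-write on the shared field atomic,
    // preventing the lost updates two unsynchronized threads would produce.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) c.increment();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 2000
    }
}
```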
### Deadlock
- Occurs when two or more threads wait indefinitely for resources held by each other.
- **Solution**: Avoid circular wait conditions and use timeout mechanisms.

### Livelock
- Occurs when threads continuously change their state in response to each other but no progress is made.
- **Solution**: Implement proper coordination and avoid excessive retries.

### Thread Safety
- Ensuring that shared data is accessed in a thread-safe manner.
- Use thread-safe collections (e.g., `ConcurrentHashMap`), synchronization mechanisms, and atomic variables.
## Note
Understanding the types of memory and how a CPU works helps in designing efficient multithreaded applications. It involves recognizing the importance of CPU cores, cache hierarchy, and synchronization mechanisms to manage concurrent access to shared resources. Properly handling synchronization can prevent issues like race conditions, deadlocks, and livelocks, ensuring efficient and correct execution of multithreaded programs.
