The Biggest Pitfall of Multi-Threaded Performance Optimization, 99% of People Are Unaware of It!

Today we're going to talk about something hardcore and practical: multi-threaded performance optimization. Don't roll your eyes. I know the topic sounds lofty, but I promise to skip the grand theory and stick to down-to-earth advice that will leave you saying "ah, so that's it!"

1. Multithreading: It’s not easy to say I love you

Multithreaded programming is a powerful tool in modern software development. It helps you make full use of multi-core processors, improve a program's responsiveness, and handle large numbers of concurrent tasks. But here's the thing: multithreading is a double-edged sword. Wield it well and it cuts through thorns and brambles; wield it badly and you dig your own grave.

Let's start with a simple scenario: suppose you have a task that requires processing a large amount of data. With a single thread you have to grind through it item by item, which is slow. Switch to multiple threads and, hey, it flies! But new problems appear too. In a multithreaded environment, resource contention, thread-safety bugs, deadlocks... these little demons pop out to make trouble when you least expect it.

2. Pitfalls of Performance Optimization

When it comes to multi-threaded performance optimization, many people's first reaction is "Lock! Lock! Lock again!" Little do they know that this is one of the biggest pitfalls. Let's unveil it step by step.

Pitfall 1: Over-locking

First, let's be clear: locks are a good thing. They keep data consistent across threads and prevent race conditions. But locks are also a bad thing, because they block threads and reduce concurrency.

For example:

public class Counter {
    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {   // every writer must take the lock
            count++;
        }
    }

    public int getCount() {
        synchronized (lock) {   // even readers have to wait for the lock
            return count;
        }
    }
}

In the code above, every call to increment and getCount must acquire the lock. That is indeed safe in a multithreaded environment, but what about efficiency? If many threads call these two methods frequently, the locking overhead becomes huge.

Solution: reduce the granularity of the lock, or use a more efficient concurrency tool such as AtomicInteger from the java.util.concurrent package. Look how concise and efficient this is:

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet();   // lock-free, implemented with CAS under the hood
    }

    public int getCount() {
        return count.get();
    }
}

Pitfall 2: Improper lock usage

Locks demand care. Used improperly, they not only fail to deliver the expected effect but can introduce new problems, such as deadlock. In the example below, if one thread calls method1 while another calls method2, congratulations, you have a deadlock!

Deadlock example:

public class DeadlockExample {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();

    public void method1() {
        synchronized (lock1) {       // thread A grabs lock1 first...
            // Do something
            synchronized (lock2) {   // ...then waits for lock2
                // Do something else
            }
        }
    }

    public void method2() {
        synchronized (lock2) {       // thread B grabs lock2 first...
            // Do something
            synchronized (lock1) {   // ...then waits for lock1
                // Do something else
            }
        }
    }
}

Solution: avoid nested locks where possible, always acquire locks in the same order, or use a more flexible synchronization mechanism such as the Lock interface and its implementations, which let you try to acquire a lock with a timeout instead of blocking forever.
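
As a minimal sketch of that last idea, here is a version that tries to take both locks with ReentrantLock.tryLock and backs off instead of blocking forever; the class and method names are just for illustration:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    private final ReentrantLock lock1 = new ReentrantLock();
    private final ReentrantLock lock2 = new ReentrantLock();

    // Try to acquire both locks; if the second one can't be taken,
    // release the first and start over instead of waiting indefinitely.
    public boolean doBoth() throws InterruptedException {
        while (true) {
            if (lock1.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (lock2.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            // Both locks held: do the combined work here.
                            return true;
                        } finally {
                            lock2.unlock();
                        }
                    }
                } finally {
                    lock1.unlock();
                }
            }
            // Could not get both locks this round; loop and retry.
        }
    }
}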

Pitfall 3: Thread starvation and livelock

Thread starvation, in simple terms, means a thread never gets a chance to run because other threads keep winning the scheduler or the locks. Livelock means the threads are not blocked at all: they keep reacting to each other and politely backing off, yet none of them ever makes real progress.

Livelock example: imagine two threads both trying to enter a critical section. Each time one of them detects that the other is also trying, it backs off and retries a moment later. Since they keep backing off at the same time, both threads retry forever and neither ever gets in.

Solution: Introduce randomness, such as making the thread wait for a random amount of time before retrying, or use a more complex synchronization strategy.
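
Here is a minimal sketch of the random-backoff idea, assuming a single ReentrantLock guards the critical section; the sleep bounds are arbitrary illustration values:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.ReentrantLock;

public class RandomBackoff {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        while (true) {
            if (lock.tryLock()) {
                try {
                    // Inside the critical section: do the real work.
                    return;
                } finally {
                    lock.unlock();
                }
            }
            // The lock is busy: wait a *random* amount of time so that
            // two contending threads stop retrying in lockstep.
            Thread.sleep(ThreadLocalRandom.current().nextLong(1, 20));
        }
    }
}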

3. The correct approach to multi-threaded performance optimization

After talking about so many pitfalls, how can we correctly optimize multi-threaded performance? Don't worry, here are some tips for you.

1. Use the right concurrency tools

Java's java.util.concurrent package is full of treasures waiting to be discovered. For example (a small sketch combining two of them follows this list):

  • ConcurrentHashMap: An efficient and thread-safe hash table.
  • ExecutorService: Conveniently manage thread pools and avoid manual creation and management of threads.
  • CountDownLatch, CyclicBarrier, Semaphore: advanced synchronization tools that help you control the collaboration between threads more finely.
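
Here is a small sketch combining two of these, assuming the work items are just dummy print tasks: an ExecutorService runs the jobs and a CountDownLatch lets the main thread wait for all of them.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ToolsDemo {
    public static void main(String[] args) throws InterruptedException {
        int tasks = 4;
        ExecutorService pool = Executors.newFixedThreadPool(2); // reuse threads instead of creating them by hand
        CountDownLatch done = new CountDownLatch(tasks);        // the main thread waits on this

        for (int i = 0; i < tasks; i++) {
            final int id = i;
            pool.submit(() -> {
                System.out.println("task " + id + " on " + Thread.currentThread().getName());
                done.countDown();   // signal that this task has finished
            });
        }

        done.await();               // block until all tasks have counted down
        pool.shutdown();
        System.out.println("all tasks finished");
    }
}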

2. Reduce lock contention

Lock contention is one of the main sources of multithreaded performance bottlenecks. How can we reduce it?

  • Segment lock: Divide the data into multiple segments, each with its own lock. In this way, data in different segments can be accessed by multiple threads at the same time.
  • Read-write lock: read operations usually do not change data, so many threads can read at the same time, while writes need exclusive access. ReentrantReadWriteLock is a good helper here (see the sketch after this list).
  • Optimistic locking: Assuming that conflicts do not occur often, do not lock first, and deal with conflicts when they do occur. For example, AtomicStampedReference.
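
Here is a minimal read-write-lock sketch, assuming a simple in-memory string cache; the class and field names are just for illustration:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SimpleCache {
    private final Map<String, String> data = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();            // many readers may hold this at once
        try {
            return data.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();           // writers get exclusive access
        try {
            data.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}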

3. Optimize thread pool

Thread pool is a good thing, but it can also be a pitfall if not used properly. How to optimize it?

  • Set the number of threads appropriately: too many threads cause frequent context switching and hurt performance; too few mean tasks cannot be processed in time. A common rule of thumb is to size the pool based on the number of CPU cores and whether the tasks are CPU-bound or I/O-bound (a configuration sketch follows this list).
  • Choose the right rejection policy: when the pool and its queue are full and a new task arrives, what should happen? Throw an exception, silently discard the task, discard the oldest queued task, or make the submitting thread run it itself? Java's ThreadPoolExecutor ships a policy for each of these, and the right choice depends on your business scenario.
  • Regular monitoring and adjustment: The state of the thread pool is dynamic, and its performance indicators, such as task processing speed, queue length, etc., must be monitored regularly, and then adjusted according to actual conditions.
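
Here is a sketch of an explicitly configured ThreadPoolExecutor, assuming CPU-bound tasks; the pool sizes, queue length, and rejection policy are illustrative choices, not universal recommendations:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfig {
    public static ThreadPoolExecutor newPool() {
        int cores = Runtime.getRuntime().availableProcessors();
        return new ThreadPoolExecutor(
                cores,                                    // core pool size: roughly one thread per core for CPU-bound work
                cores * 2,                                // maximum pool size
                60, TimeUnit.SECONDS,                     // idle time before extra threads are reclaimed
                new ArrayBlockingQueue<>(100),            // bounded queue, so the pool can actually fill up
                new ThreadPoolExecutor.CallerRunsPolicy() // when full, the submitting thread runs the task itself
        );
    }
}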

4. Avoid unnecessary sharing of data

Sharing data is one of the biggest challenges in multithreaded programming. Avoid it if you can.

  • Use local variables: Local variables are thread-private and do not require synchronization.
  • Use immutable objects: Immutable objects cannot be modified once created, so they are naturally thread-safe.
  • Use thread-local variables: the ThreadLocal class gives each thread its own copy of a variable, so no synchronization is needed (see the sketch after this list).
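
Here is a minimal ThreadLocal sketch; the classic use case is giving each thread its own SimpleDateFormat, since that class is not thread-safe:

import java.text.SimpleDateFormat;
import java.util.Date;

public class Formatters {
    // Each thread lazily gets its own SimpleDateFormat instance.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String today() {
        return FORMAT.get().format(new Date());  // no locking needed: the instance is thread-private
    }
}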

5. Leverage concurrent algorithms and data structures

Some algorithms and data structures are specially designed for concurrent scenarios, use them!

  • Parallel computing framework: for example, the Fork/Join framework helps you split a large task into smaller ones and execute them in parallel (a small sketch follows this list).
  • Concurrent collections: such as CopyOnWriteArrayList, ConcurrentSkipListMap, etc., they are all thread-safe and have good performance.
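
Here is a minimal Fork/Join sketch that sums an array by recursively splitting it in half; the threshold is an arbitrary illustrative value:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ArraySum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;   // below this size, just sum sequentially
    private final long[] data;
    private final int from, to;

    public ArraySum(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        ArraySum left = new ArraySum(data, from, mid);
        ArraySum right = new ArraySum(data, mid, to);
        left.fork();                             // run the left half asynchronously
        return right.compute() + left.join();    // compute the right half, then wait for the left
    }

    public static void main(String[] args) {
        long[] numbers = new long[1_000_000];
        for (int i = 0; i < numbers.length; i++) numbers[i] = i;
        long total = new ForkJoinPool().invoke(new ArraySum(numbers, 0, numbers.length));
        System.out.println("sum = " + total);
    }
}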

4. Conclusion

Multithreaded performance optimization is a real technical job. We have to avoid pitfalls such as excessive locking, incorrect lock usage, thread starvation, and livelock. Then, we have to learn to use concurrency tools correctly, reduce lock contention, optimize thread pools, avoid unnecessary shared data, and use concurrent algorithms and data structures.

After all this, doesn't multithreading seem a lot less scary? Once you master the right methods, multithreading is a sharp sword in your hand that can cut through thorns and solve all kinds of complex problems. Well, that's it for today's sharing, and I hope it helps you. If you have other questions or ideas, please leave a message and let's talk! See you next time!