Sathish

Reputation: 5173

How GC suspends/blocks the application threads

I understand that GC gets triggered when a new object allocation fails or System.gc() is called. Every GC algorithm's description suggests that, as a first step, the GC thread suspends all application threads so that they can't interfere with the GC activity.

But I would like to understand how the GC suspends all the running threads. Are there safepoints defined by the JVM, for example at memory allocation (new object creation) or method invocation, such that when an application thread reaches one of these safepoints it is blocked against a GC lock? Is that true? If so, what about an application thread that does only a simple computation like the following (I know this would never happen in reality): will it ever get suspended?

while(true) {
    a = a + s;
    s = s + a;

    // some computation that doesn't touch any JVM safe points 
}

In such a case, does the GC carry on without suspending this application thread (and suspend/block it later, when it tries to cross a safepoint, for example at an object allocation)?

But I believe the GC always waits for these application threads to enter a safepoint and suspends them before proceeding. Is my assumption correct?

Upvotes: 3

Views: 2365

Answers (1)

the8472

Reputation: 43052

But I would like to understand how GC suspends all the running threads?

The HotSpot implementation uses safepoint polling. To quote:

How do safepoints work?

The safepoint protocol in the HotSpot JVM is collaborative. Each application thread checks the safepoint status and parks itself in a safe state if a safepoint is required. For compiled code, the JIT inserts safepoint checks at certain points (usually after return from a call or at the back jump of a loop). For interpreted code, the JVM has two bytecode dispatch tables, and if a safepoint is required, it switches tables to enable the safepoint check.

The safepoint status check itself is implemented in a very cunning way. A normal check of a memory variable would require expensive memory barriers. Instead, the safepoint check is implemented as a read from a special memory page. When a safepoint is required, the JVM unmaps that page, provoking a page fault in the application thread (which is handled by the JVM's handler). This way, HotSpot keeps its JITed code CPU-pipeline friendly, yet ensures correct memory semantics (the page unmap forces a memory barrier on the processing cores).

A more detailed description can be found on the mechanical-sympathy mailing list.
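The collaborative part of that protocol can be sketched in plain Java. This is a toy model of my own (HotSpot does this in native code with the page trick described above, not with a `volatile` flag): worker threads poll a shared flag at their loop back edges and park themselves, while a "VM" thread raises the flag, waits for every thread to park, does its stop-the-world work, then releases them.

```java
// Toy sketch of the collaborative safepoint protocol (not HotSpot
// internals): suspension happens only when each thread polls and
// parks itself, never by preemption.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

class SafepointSketch {
    static final AtomicBoolean safepointRequested = new AtomicBoolean();
    static final CountDownLatch allParked = new CountDownLatch(2);
    static final CountDownLatch resume = new CountDownLatch(1);

    // The equivalent of a JIT-inserted safepoint poll.
    static void safepointPoll() throws InterruptedException {
        if (safepointRequested.get()) { // cheap check on the fast path
            allParked.countDown();      // report "I am at a safepoint"
            resume.await();             // park until the VM releases us
        }
    }

    public static void main(String[] args) throws Exception {
        Runnable worker = () -> {
            try {
                for (long i = 0; i < Long.MAX_VALUE; i++) {
                    // ... application work ...
                    safepointPoll();    // poll at the loop back edge
                }
            } catch (InterruptedException e) { /* exit */ }
        };
        new Thread(worker).start();
        new Thread(worker).start();

        Thread.sleep(50);
        safepointRequested.set(true);   // "VM" requests a safepoint
        allParked.await();              // wait for every thread to park
        System.out.println("all threads parked; GC can run");
        safepointRequested.set(false);
        resume.countDown();             // release the world
        System.exit(0);
    }
}
```

The property this sketch shares with HotSpot is that suspension is cooperative: the VM never preempts a thread mid-operation, it only waits until every thread has parked itself at a poll.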


 // some computation that doesn't touch any JVM safe points 

The JIT compiler only allows a loop without safepoint polls if it can prove that the loop finishes in a finite amount of time. Otherwise, it inserts safepoint polls at the loop's back edge.
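To make that concrete, here is an illustrative sketch of my own (the poll placement is described in comments; it is not observable from the Java source itself): an `int`-bounded "counted" loop is provably finite, so HotSpot's C2 compiler may elide the back-edge safepoint poll, while a `while(true)` loop like the one in the question keeps one.

```java
class SafepointPolls {
    // "Counted" loop: int induction variable, int bound. C2 can prove
    // it finite and may omit the safepoint poll at the back edge.
    static int countedLoop(int n, int a, int s) {
        for (int i = 0; i < n; i++) {
            a = a + s;
            s = s + a;
        }
        return s;
    }

    // Not provably finite: the JIT keeps a safepoint poll at the back
    // edge, so the GC can still bring this thread to a safepoint.
    static int endlessLoop(int a, int s) {
        while (true) {
            a = a + s;
            s = s + a;
        }
    }

    public static void main(String[] args) {
        System.out.println(countedLoop(3, 1, 1)); // prints 21
    }
}
```

Long-running counted loops without polls are a classic cause of long time-to-safepoint pauses; the HotSpot flag -XX:+UseCountedLoopSafepoints forces polls into counted loops as well.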

Upvotes: 8
