
Catching java.lang.OutOfMemoryError?

big-blog 2020. 8. 28. 07:52



From the documentation for java.lang.Error:

An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch.

But since java.lang.Error is a subclass of java.lang.Throwable, we can still catch this type of Throwable.

I understand why it is not a good idea to catch this kind of error. As far as I know, if we decide to catch it, the catch handler should not allocate any memory by itself; otherwise the OutOfMemoryError will be thrown again.

So, my questions are:

  1. Are there any real-world scenarios where catching java.lang.OutOfMemoryError may be a good idea?
  2. If we decide to catch java.lang.OutOfMemoryError, how can we make sure the catch handler doesn't allocate any memory by itself (any tools or best practices)?

I agree and disagree with most of the answers here.

There are a number of scenarios where you may wish to catch an OutOfMemoryError, and in my experience (on Windows and Solaris JVMs) it is only very rarely the death knell of the JVM.

There is only one good reason to catch an OutOfMemoryError: to shut down gracefully, cleanly releasing resources and logging the reason for the failure as best you can (if that is still possible).

In general, an OutOfMemoryError occurs due to a block memory allocation that cannot be satisfied with the remaining resources of the heap.

When the Error is thrown, the heap contains the same amount of allocated objects as before the unsuccessful allocation, and now is the time to drop references to run-time objects in order to free even more memory that may be required for cleanup. In these cases it may even be possible to continue, but that would definitely be a bad idea, as you can never be 100% certain that the JVM is in a recoverable state.

A demonstration that an OutOfMemoryError does not mean that the JVM is out of memory inside the catch block:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

private static final int MEGABYTE = (1024*1024);

public static void runOutOfMemory() {
    MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
    for (int i=1; i <= 100; i++) {
        try {
            // Each iteration tries (and fails) to allocate a single 500MB block.
            byte[] bytes = new byte[MEGABYTE*500];
        } catch (Exception e) {
            e.printStackTrace();
        } catch (OutOfMemoryError e) {
            // The failed allocation never became reachable, so the heap is still mostly free here.
            MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
            long maxMemory = heapUsage.getMax() / MEGABYTE;
            long usedMemory = heapUsage.getUsed() / MEGABYTE;
            System.out.println(i+ " : Memory Use :" + usedMemory + "M/" + maxMemory + "M");
        }
    }
}

Output of this code:

1 : Memory Use :0M/247M
..
..
..
98 : Memory Use :0M/247M
99 : Memory Use :0M/247M
100 : Memory Use :0M/247M

If you are running something critical, I generally catch the Error, log it to syserr, then log it using my logging framework of choice, then proceed to release resources and shut down in a clean fashion. What's the worst that can happen? The JVM is dying (or already dead) anyway, and by catching the Error there is at least a chance of cleanup.

The caveat is that you have to target the catching of these types of errors only in places where cleanup is possible. Don't blanket catch(Throwable t) {} everywhere, or nonsense like that.
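As a rough sketch of the "catch, log, clean up, shut down" pattern described above (runService and releaseResources are placeholders for whatever your application actually does, not part of any real API):

public final class CriticalService {

    public static void main(String[] args) {
        try {
            runService();                      // the application's real work
        } catch (Error e) {
            e.printStackTrace();               // log to syserr first, as cheaply as possible
            try {
                releaseResources();            // close files, sockets, etc.
            } finally {
                System.exit(1);                // shut down; the JVM is no longer trustworthy
            }
        }
    }

    private static void runService() { /* ... */ }

    private static void releaseResources() { /* ... */ }
}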


It is possible to recover from it, like this:

package com.stackoverflow.q2679330;

public class Test {

    public static void main(String... args) {
        int size = Integer.MAX_VALUE;
        int factor = 10;

        while (true) {
            try {
                System.out.println("Trying to allocate " + size + " bytes");
                byte[] bytes = new byte[size];
                System.out.println("Succeed!");
                break;
            } catch (OutOfMemoryError e) {
                System.out.println("OOME .. Trying again with 10x less");
                size /= factor;
            }
        }
    }

}

But does it make sense? What else would you do? Why would you allocate that much memory in the first place? Is less memory also OK? Why not just make use of that anyway? Or, if that is not possible, why not just give the JVM more memory from the start?

Back to your questions:

1: Are there any real-world scenarios where catching java.lang.OutOfMemoryError may be a good idea?

None come to mind.

2: If we decide to catch java.lang.OutOfMemoryError, how can we make sure the catch handler doesn't allocate any memory by itself (any tools or best practices)?

It depends on what caused the OOME. If it was caused by something declared outside the try block and it happened step by step, then your chances are slim. You may want to reserve some memory space beforehand:

private static byte[] reserve = new byte[1024 * 1024]; // Reserves 1MB.

Then, when the OOME occurs, set it to zero:

} catch (OutOfMemoryError e) {
     reserve = new byte[0];
     // Ha! 1MB free!
}

Of course, this all makes no sense ;) Just give the JVM as much memory as your application needs. Run a profiler if necessary.


In general, it is a bad idea to catch an OOM and attempt to recover from it:

  1. An OOME could have been thrown on another thread, including a thread that your application doesn't even know about. Such threads are now dead, and anything that was waiting on a notify from them could be stuck forever. In short, your app could be broken beyond repair.

  2. Even if you do recover successfully, your JVM may still be suffering from heap starvation, and your application will perform abysmally as a result.

The best thing to do with an OOME is to let the JVM die.

(This assumes that the JVM actually dies. For example, an OOM on a Tomcat servlet thread does not kill the JVM, and it can leave Tomcat in a catatonic state where it no longer responds to requests, not even a request to restart.)

EDIT

I'm not saying that catching an OOM is always a bad idea. The problems arise when you attempt to recover from the OOME, whether deliberately or through oversight. Whenever you catch an OOM (directly, or as a subtype of Error or Throwable), you should either rethrow it or arrange for the application / JVM to exit.

Note: this implies that, for maximum robustness against OOMs, an application should use Thread.setDefaultUncaughtExceptionHandler() to set a handler that causes the application to exit on an OOME, no matter which thread the OOME is thrown on. I'd be interested in opinions on this ...
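A minimal sketch of such a default handler might look like this; the use of Runtime.halt(1) is a deliberate choice (it terminates immediately without running shutdown hooks that might themselves need memory), and whether that trade-off is right for your application is up to you:

public class OomFailFast {

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            if (throwable instanceof OutOfMemoryError) {
                // Do as little as possible here; halt() terminates the JVM
                // immediately without risking further allocation.
                Runtime.getRuntime().halt(1);
            } else {
                throwable.printStackTrace();
            }
        });
    }

    public static void main(String[] args) {
        install();
        // ... start the rest of the application ...
    }
}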

The only other scenario is when you know for sure that the OOM has not resulted in any collateral damage; that is, you know:

  • what specifically caused the OOME,
  • what the application was doing at the time, and that it is OK to simply discard that computation, and
  • that a (roughly) simultaneous OOME cannot have occurred on another thread.

There are applications where it is possible to know these things, but for most applications you cannot know for sure that continuation after an OOME is safe. Even if it empirically "works" when you try it.

(The problem is that a formal proof is required to show that the consequences of "anticipated" OOMEs are safe, and that "unanticipated" OOMEs cannot occur within the control of a try/catch OOME.)


Yes, there are real-world scenarios. Here's mine: I need to process data sets of very many items on a cluster with limited memory per node. A given JVM instance goes through many items one after the other, but some of the items are too big to process on the cluster: I can catch the OutOfMemoryError and take note of which items are too big. Later, I can re-run just the large items on a computer with more RAM.

(Because it's a single multi-gigabyte allocation of an array that fails, the JVM is still fine after catching the error and there's enough memory to process the other items.)
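A sketch of what that pattern might look like in code; the item ids, processItem, and the surrounding class are invented for illustration, not taken from the answerer's actual system:

import java.util.ArrayList;
import java.util.List;

class BatchRunner {

    // Process items one by one; if the single huge allocation for an item fails,
    // remember the item for a later run on a machine with more RAM and move on.
    static List<String> runAll(List<String> itemIds) {
        List<String> tooBig = new ArrayList<>();
        for (String id : itemIds) {
            try {
                processItem(id);                 // may throw OOME on one big array allocation
            } catch (OutOfMemoryError e) {
                // The failed allocation never became reachable, so the JVM
                // usually still has room to continue with the smaller items.
                tooBig.add(id);
            }
        }
        return tooBig;                           // re-run these elsewhere
    }

    static void processItem(String id) { /* ... allocate and process ... */ }
}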


There are definitely scenarios where catching an OOME makes sense. IDEA catches them and pops up a dialog to let you change the startup memory settings (and then exits when you are done). An application server might catch and report them. The key to doing this is to do it at a high level on the dispatch so that you have a reasonable chance of having a bunch of resources freed up at the point where you are catching the exception.

Besides the IDEA scenario above, in general the catching should be of Throwable, not just OOM specifically, and should be done in a context where at least the thread will be terminated shortly.

Of course most times memory is starved and the situation is not recoverable, but there are ways that it makes sense.
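Sketching the "catch it at a high level on the dispatch" idea; the Worker class and its task queue below are invented for illustration and are not how IDEA or any particular application server actually does it:

import java.util.concurrent.BlockingQueue;

class Worker implements Runnable {
    private final BlockingQueue<Runnable> tasks;

    Worker(BlockingQueue<Runnable> tasks) { this.tasks = tasks; }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                tasks.take().run();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();   // stop the loop
            } catch (Throwable t) {
                // The failed task's allocations are unreachable by now, so there is
                // a reasonable chance some heap has already been freed.
                System.err.println("Task failed: " + t);
                if (t instanceof OutOfMemoryError) {
                    return;                           // terminate this worker shortly
                }
            }
        }
    }
}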


I came across this question because I was wondering whether it is a good idea to catch OutOfMemoryError in my case. I'm answering here partially to show yet another example when catching this error can make sense to someone (i.e. me) and partially to find out whether it is a good idea in my case indeed (with me being an uber junior developer I can never be too sure about any single line of code I write).

Anyway, I'm working on an Android application which can run on different devices with different memory sizes. The dangerous part is decoding a bitmap from a file and displaying it in an ImageView instance. I don't want to restrict the more powerful devices in terms of the size of the decoded bitmap, nor can I be sure that the app won't be run on some ancient device with very low memory that I've never come across. Hence I do this:

// image (a Bitmap) and imageView (an ImageView) are declared elsewhere in the Activity.
BitmapFactory.Options bitmapOptions = new BitmapFactory.Options();
bitmapOptions.inSampleSize = 1;
boolean imageSet = false;
while (!imageSet) {
  try {
    image = BitmapFactory.decodeFile(filePath, bitmapOptions);
    imageView.setImageBitmap(image);
    imageSet = true;
  }
  catch (OutOfMemoryError e) {
    // Decoding failed: halve the resolution (double the sample size) and retry.
    bitmapOptions.inSampleSize *= 2;
  }
}

This way I manage to provide for more and less powerful devices according to their, or rather their users' needs and expectations.


Yes, the real question is "what are you going to do in the exception handler?" For almost anything useful, you'll allocate more memory. If you'd like to do some diagnostic work when an OutOfMemoryError occurs, you can use the -XX:OnOutOfMemoryError=<cmd> hook supplied by the HotSpot VM. It will execute your command(s) when an OutOfMemoryError occurs, and you can do something useful outside of Java's heap. You really want to keep the application from running out of memory in the first place, so figuring out why it happens is the first step. Then you can increase the heap size or MaxPermSize as appropriate. Here are some other useful HotSpot hooks:

-XX:+PrintCommandLineFlags
-XX:+PrintConcurrentLocks
-XX:+PrintClassHistogram

See the full list here
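Going back to the -XX:OnOutOfMemoryError hook mentioned above, a hedged usage example might look like the following; the diagnostics script name is purely hypothetical, and %p is replaced by HotSpot with the JVM's own process id:

java -XX:OnOutOfMemoryError="sh collect-diagnostics.sh %p" MyApp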


I have an application that needs to recover from OutOfMemoryError failures, and in single-threaded programs it always works, but sometimes doesn't in multi-threaded programs. The application is an automated Java testing tool that executes generated test sequences to the maximum possible depth on test classes. Now, the UI must be stable, but the test engine can run out of memory while growing the tree of test cases. I handle this by the following kind of code idiom in the test engine:

boolean isOutOfMemory = false;  // flag used for reporting
try {
   SomeType largeVar;
   // Main loop that allocates more and more to largeVar
   // may terminate OK, or raise OutOfMemoryError
}
catch (OutOfMemoryError ex) {
   // largeVar is now out of scope, so is garbage
   System.gc();                // clean up largeVar data
   isOutOfMemory = true;       // flag available for use
}
// program tests flag to report recovery

This works every time in single-threaded applications. But I recently put my test engine into a separate worker-thread from the UI. Now, the out of memory may occur arbitrarily in either thread, and it is not clear to me how to catch it.

For example, I had the OOME occur while the frames of an animated GIF in my UI were being cycled by a proprietary thread that is created behind-the-scenes by a Swing class that is out of my control. I had thought that I had allocated all the resources needed in advance, but clearly the animator is allocating memory every time it fetches the next image. If anyone has an idea about how to handle OOMEs raised in any thread, I would love to hear.
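One thing worth trying, in line with the Thread.setDefaultUncaughtExceptionHandler() suggestion further up this page, is a process-wide handler that merely records the failure so the UI can report it, no matter which thread the OOME escaped from. This is only a sketch under that assumption, and the flag class is invented:

import java.util.concurrent.atomic.AtomicBoolean;

class OomFlag {
    static final AtomicBoolean OUT_OF_MEMORY = new AtomicBoolean(false);

    static void install() {
        Thread.setDefaultUncaughtExceptionHandler((t, e) -> {
            if (e instanceof OutOfMemoryError) {
                OUT_OF_MEMORY.set(true);    // setting a boolean allocates nothing
            } else {
                e.printStackTrace();
            }
        });
    }
}

Note that this only helps for OOMEs that actually escape a thread's run method; anything swallowed internally (for example inside library code) never reaches the handler.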


OOME can be caught, but it is generally going to be useless, depending on whether the JVM is able to garbage-collect some objects once the catch is reached, and on how much heap memory is left by that time.

Example: in my JVM, this program runs to completion:

import java.util.LinkedList;
import java.util.List;

public class OOMErrorTest {             
    public static void main(String[] args) {
        List<Long> ll = new LinkedList<Long>();

        try {
            long l = 0;
            while(true){
                ll.add(new Long(l++));
            }
        } catch(OutOfMemoryError oome){         
            System.out.println("Error catched!!");
        }
        System.out.println("Test finished");
    }  
}

However, just adding a single line on the catch will show you what I'm talking about:

import java.util.LinkedList;
import java.util.List;

public class OOMErrorTest {             
    public static void main(String[] args) {
        List<Long> ll = new LinkedList<Long>();

        try {
            long l = 0;
            while(true){
                ll.add(new Long(l++));
            }
        } catch(OutOfMemoryError oome){         
            System.out.println("Error catched!!");
            System.out.println("size:" +ll.size());
        }
        System.out.println("Test finished");
    }
}

The first program runs fine because when the catch is reached, the JVM detects that the list isn't going to be used anymore (this detection can also be an optimization made at compile time). So when we reach the print statement the heap memory has been freed almost entirely, and we now have plenty of room to maneuver and continue. This is the best case.

However, if the code is arranged such that the list ll is used after the OOME has been caught, the JVM is unable to collect it. This happens in the second snippet. The OOME, triggered by the creation of a new Long, is caught, but soon we are creating a new object (a String in the System.out.println line) and the heap is almost full, so a new OOME is thrown. This is the worst-case scenario: we tried to create a new object, we failed, we caught the OOME, yes, but now the first instruction requiring new heap memory (e.g. creating a new object) will throw a new OOME. Think about it: what else can we do at this point with so little memory left? Probably just exit. Hence it is useless.

Among the reasons the JVM may be unable to collect resources, one is really scary: a resource shared with other threads that are also making use of it. Anyone with a brain can see how dangerous catching an OOME can be when it's inserted into any kind of non-experimental app.

I'm using a Windows x86 32bits JVM (JRE6). Default memory for each Java app is 64MB.


The only reason I can think of why catching OOM errors could be useful is that you have some massive data structures you're not using anymore, which you can set to null to free up some memory. But (1) that means you're wasting memory, and you should fix your code rather than just limping along after an OOME, and (2) even if you caught it, what would you do? An OOM can happen at any time, potentially leaving everything half done.


For question 2, the solution I would suggest has already been posted by BalusC.

  1. Are there any real-world scenarios when catching java.lang.OutOfMemoryError may be a good idea?

I think I just came across a good example. When an AWT application is dispatching messages, an uncaught OutOfMemoryError is printed on stderr and the processing of the current message stops, but the application keeps running! The user may still issue other commands, unaware of the serious problems happening behind the scenes, especially when they cannot or do not watch the standard error output. So catching the OOM error and offering (or at least suggesting) an application restart is something desirable.


I just have a scenario where catching an OutOfMemoryError seems to make sense and seems to work.

Scenario: in an Android App, I want to display multiple bitmaps in highest possible resolution, and I want to be able to zoom them fluently.

Because of fluent zooming, I want to have the bitmaps in memory. However, Android has limitations in memory which are device dependent and which are hard to control.

In this situation, there may be OutOfMemoryError while reading the bitmap. Here, it helps if I catch it and then continue with lower resolution.


  1. Depends on how you define "good". We do that in our buggy web application, and it does work most of the time (thankfully, OutOfMemory no longer happens, thanks to an unrelated fix). However, even if you catch it, it still might have broken some important code: if you have several threads, memory allocation can fail in any of them. So, depending on your application, there is still a 10-90% chance of it being irreversibly broken.
  2. As far as I understand, the heavy stack unwinding along the way will invalidate so many references, and thus free so much memory, that you shouldn't need to worry about that.

EDIT: I suggest you try it out. Say, write a program that recursively calls a function that allocates progressively more memory, catch the OutOfMemoryError, and see whether you can meaningfully continue from that point. In my experience you will be able to, though in my case it happened under a WebLogic server, so there might have been some black magic involved.
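A sketch of such an experiment, with arbitrarily chosen numbers; whether the program "meaningfully continues" afterwards depends entirely on your JVM and heap settings:

import java.util.ArrayList;
import java.util.List;

public class OomRecoveryExperiment {

    // Recursively allocate progressively bigger blocks until the heap gives out.
    static void allocate(List<byte[]> blocks, int sizeBytes) {
        blocks.add(new byte[sizeBytes]);
        // Double the request each time, capping it to avoid int overflow.
        int next = sizeBytes <= Integer.MAX_VALUE / 2 ? sizeBytes * 2 : Integer.MAX_VALUE;
        allocate(blocks, next);
    }

    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        try {
            allocate(blocks, 1024 * 1024);     // start at 1MB
        } catch (OutOfMemoryError e) {
            blocks.clear();                    // drop the references so the heap can be reclaimed
            System.out.println("Caught OOME, trying to continue...");
        }
        System.out.println("Free memory now: " + Runtime.getRuntime().freeMemory() + " bytes");
    }
}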


The JVM provides some helpful arguments for dealing with OutOfMemoryError; the ones worth highlighting are:

  1. -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath
  2. -XX:OnOutOfMemoryError
  3. -XX:+ExitOnOutOfMemoryError
  4. -XX:+CrashOnOutOfMemoryError

When you pass the -XX:+ExitOnOutOfMemoryError argument, the JVM will exit as soon as an OutOfMemoryError is thrown. You may pass this argument if you would like to terminate the application, but personally I wouldn't prefer configuring it, because we should always aim to achieve a graceful exit; an abrupt exit can/will jeopardize transactions that are in motion.
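For reference, these are passed on the java command line like any other VM flag; the dump path and main class below are just placeholders:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/app-heap.hprof -XX:+ExitOnOutOfMemoryError MyApp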

OutOfMemory related JVM arguments


You can catch anything under Throwable. Generally speaking, you should only catch subclasses of Exception, excluding RuntimeException (though a large portion of developers also catch RuntimeException... but that was never the intent of the language designers).

If you were to catch OutOfMemoryError what on earth would you do? The VM is out of memory, basically all you can do is exit. You probably cannot even open a dialog box to tell them you are out of memory since that would take memory :-)

The VM throws an OutOfMemoryError when it is truly out of memory (indeed all Errors should indicate unrecoverable situations) and there should really be nothing you can do to deal with it.

The things to do are find out why you are running out of memory (use a profiler, like the one in NetBeans) and make sure you don't have memory leaks. If you don't have memory leaks then increase the memory that you allocate to the VM.

Source: https://stackoverflow.com/questions/2679330/catching-java-lang-outofmemoryerror
