
Java JMH microbenchmark of recursive quicksort

Hi, I'm trying to microbenchmark various sorting algorithms, and I've hit a strange problem with JMH and benchmarking quicksort. Maybe something is wrong with my implementation; I'd appreciate it if someone could help me see where the problem is. First of all, I'm using Ubuntu 14.04 with JDK 7 and JMH 0.9.1. Here is how I'm trying to do the benchmark:

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@OutputTimeUnit(TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 3, time = 1)
@State(Scope.Thread)
public class SortingBenchmark {

    private int length = 100000;

    // Distribution and Sorter come from the linked project
    private Distribution distribution = Distribution.RANDOM;

    private int[] array;

    int i = 1;

    @Setup(Level.Iteration)
    public void setUp() {
        array = distribution.create(length);
    }

    @Benchmark
    public int timeQuickSort() {
        int[] sorted = Sorter.quickSort(array);
        return sorted[i];
    }

    @Benchmark
    public int timeJDKSort() {
        Arrays.sort(array);
        return array[i];
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(".*" + SortingBenchmark.class.getSimpleName() + ".*")
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}

There are other algorithms as well, but I left them out because they behave more or less fine. For some reason quicksort is now extremely slow -- orders of magnitude slower! What's more, I have to allocate extra stack space for it to run without a StackOverflowError. For some reason quicksort seems to make a huge number of recursive calls. Interestingly, when I simply run the algorithm from my main class it runs fine (with the same random distribution and 100000 elements): no stack increase is needed, and a naive nanoTime benchmark shows a time very close to the other algorithms. Under JMH, the JDK sort is very fast, and the other algorithms are much more consistent with the naive nanoTime benchmark. Am I doing something wrong, or am I missing something? Here is my quicksort algorithm:

public static int[] quickSort(int[] data) {
    Sorter.quickSort(data, 0, data.length - 1);
    return data;
}

private static void quickSort(int[] data, int sublistFirstIndex, int sublistLastIndex) {
    if (sublistFirstIndex < sublistLastIndex) {
        // move smaller elements before pivot and larger after
        int pivotIndex = partition(data, sublistFirstIndex, sublistLastIndex);
        // apply recursively to sub lists
        Sorter.quickSort(data, sublistFirstIndex, pivotIndex - 1);
        Sorter.quickSort(data, pivotIndex + 1, sublistLastIndex);
    }
}

private static int partition(int[] data, int sublistFirstIndex, int sublistLastIndex) {
    int pivotElement = data[sublistLastIndex];
    int pivotIndex = sublistFirstIndex - 1;
    for (int i = sublistFirstIndex; i < sublistLastIndex; i++) {
        if (data[i] <= pivotElement) {
            pivotIndex++;
            ArrayUtils.swap(data, pivotIndex, i);
        }
    }
    ArrayUtils.swap(data, pivotIndex + 1, sublistLastIndex);
    return pivotIndex + 1; // return index of pivot element
}

Now, I understand that because of my pivot selection my algorithm will be very slow (O(n^2)) if I run it on already-sorted data. But I'm still running it on randomized input, and even when I tried running it on sorted data in my main method, it was much faster than the JMH version on randomized data. I'm pretty sure I'm missing something. You can find the full project with the other algorithms here: https://github.com/ignl/SortingAlgos/


1 Answer

  1. # Answer 1

    OK, since there really should be a proper answer here (instead of having to dig through the comments under the question), I'm putting one here, since I got burned by this myself.

    An iteration in JMH is a batch of benchmark method invocations (how many depends on the iteration time setting). So with @Setup(Level.Iteration) the setup only runs at the start of each batch of invocations. Since the array is sorted by the first invocation, every subsequent invocation calls quicksort on its worst case (an already-sorted array). That is why it takes so long and blows the stack.
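    To see why that matters, here is a minimal, self-contained sketch (class and field names are mine, not from the linked project) that counts the recursion depth of the same Lomuto-style quicksort on random input versus the already-sorted array left behind by the first run:

```java
import java.util.Random;

public class SortedInputDemo {
    static int maxDepth = 0;

    static void quickSort(int[] a, int lo, int hi, int depth) {
        maxDepth = Math.max(maxDepth, depth);
        if (lo < hi) {
            int p = partition(a, lo, hi);
            quickSort(a, lo, p - 1, depth + 1);
            quickSort(a, p + 1, hi, depth + 1);
        }
    }

    // Lomuto partition with the last element as pivot, as in the question
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo - 1;
        for (int j = lo; j < hi; j++) {
            if (a[j] <= pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[hi]; a[hi] = t;
        return i + 1;
    }

    public static void main(String[] args) {
        int n = 2000; // small enough that the sorted case still fits the stack
        int[] data = new Random(42).ints(n, 0, n).toArray();

        quickSort(data, 0, n - 1, 1);
        System.out.println("depth on random input: " + maxDepth); // roughly O(log n)

        maxDepth = 0;
        // data is now sorted, just like the benchmark array after invocation #1;
        // with a last-element pivot every partition strips off one element,
        // so the recursion depth equals n
        quickSort(data, 0, n - 1, 1);
        System.out.println("depth on sorted input: " + maxDepth);
    }
}
```

    Scale the sorted-input case up to the question's 100000 elements and the depth-100000 recursion explains both the StackOverflowError and the O(n^2) running time.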

    So one solution would be @Setup(Level.Invocation). However, as the Javadoc states:

    /**
         * Invocation level: to be executed for each benchmark method execution.
         *
         * <p><b>WARNING: HERE BE DRAGONS! THIS IS A SHARP TOOL.
         * MAKE SURE YOU UNDERSTAND THE REASONING AND THE IMPLICATIONS
         * OF THE WARNINGS BELOW BEFORE EVEN CONSIDERING USING THIS LEVEL.</b></p>
         *
         * <p>This level is only usable for benchmarks taking more than a millisecond
         * per single {@link Benchmark} method invocation. It is a good idea to validate
         * the impact for your case on ad-hoc basis as well.</p>
         *
         * <p>WARNING #1: Since we have to subtract the setup/teardown costs from
         * the benchmark time, on this level, we have to timestamp *each* benchmark
         * invocation. If the benchmarked method is small, then we saturate the
         * system with timestamp requests, which introduce artificial latency,
         * throughput, and scalability bottlenecks.</p>
         *
         * <p>WARNING #2: Since we measure individual invocation timings with this
         * level, we probably set ourselves up for (coordinated) omission. That means
         * the hiccups in measurement can be hidden from timing measurement, and
         * can introduce surprising results. For example, when we use timings to
         * understand the benchmark throughput, the omitted timing measurement will
         * result in lower aggregate time, and fictionally *larger* throughput.</p>
         *
         * <p>WARNING #3: In order to maintain the same sharing behavior as other
         * Levels, we sometimes have to synchronize (arbitrage) the access to
         * {@link State} objects. Other levels do this outside the measurement,
         * but at this level, we have to synchronize on *critical path*, further
         * offsetting the measurement.</p>
         *
         * <p>WARNING #4: Current implementation allows the helper method execution
         * at this Level to overlap with the benchmark invocation itself in order
         * to simplify arbitrage. That matters in multi-threaded benchmarks, when
         * one worker thread executing {@link Benchmark} method may observe other
         * worker thread already calling {@link TearDown} for the same object.</p>
         */ 
    

    So, as Aleksey Shipilev suggested, absorb the array-copy cost into each benchmark method instead. Since you are comparing relative performance, the copy cost is paid identically by every algorithm and should not affect your results.
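    Concretely, that means the @Benchmark method clones the pristine array before sorting (e.g. `int[] copy = Arrays.copyOf(array, array.length); Sorter.quickSort(copy);`). A stdlib-only sketch of the pattern, without the JMH dependency (names are illustrative):

```java
import java.util.Arrays;
import java.util.Random;

public class CopyPerInvocation {
    // pristine input, created once -- the analogue of @Setup(Level.Iteration)
    static final int[] SOURCE = new Random(1).ints(100_000, 0, 100_000).toArray();

    // analogue of the @Benchmark method body: pay the copy cost inside the
    // measured code so every invocation sorts genuinely unsorted data
    static int[] sortCopy() {
        int[] copy = Arrays.copyOf(SOURCE, SOURCE.length);
        Arrays.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        int[] first = sortCopy();
        int[] second = sortCopy();
        // SOURCE is never mutated, so repeated invocations see the same
        // unsorted input and produce the same sorted result
        System.out.println(Arrays.equals(first, second));
    }
}
```

    Because the copy is part of the measured method, the absolute numbers include it, but every benchmark in the comparison carries the same overhead, which is exactly what the answer relies on.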