Java unit test with Beam Dataflow and HBase Put always fails
I am building a pipeline with Apache Beam on Dataflow that reads data from GCP Pub/Sub and writes it to a table in a Bigtable instance.
Now I am trying to write a unit test that checks whether the received data is correctly transformed into the Bigtable schema. The problem is that the unit test I am building (which I feel should be very straightforward) always fails when comparing the generated mutations with the expected ones:
Caused by: java.lang.AssertionError:
Expected: iterable over [<KV{name:value, [{"totalColumns":1,"row":"name:value","families":{"F":[{"qualifier":"name","vlen":5,"tag":[],"timestamp":"1"}]}}]}>] in any order
but: Not matched: <KV{name:value, [{"totalColumns":1,"row":"name:value","families":{"F":[{"qualifier":"name","vlen":5,"tag":[],"timestamp":"1"}]}}]}>
I wrote a very simple test to reproduce this, and it fails with the error above:
package com.energyworx.dataflow.transforms.integration;

import org.apache.beam.sdk.testing.NeedsRunner;
import org.apache.beam.sdk.testing.PAssert;
import org.apache.beam.sdk.testing.TestPipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Rule;
import org.junit.Test;
import org.junit.experimental.categories.Category;

import java.util.Collections;

public class SimpleTest {

    @Rule
    public final transient TestPipeline pipeline = TestPipeline.create();

    private static final byte[] COLUMN_FAMILY_BYTES = Bytes.toBytes("F");
    private static final String INPUT_VALUE = "name:value";

    private static class TestDoFn extends DoFn<String, KV<String, Iterable<Mutation>>> {
        @ProcessElement
        public void processElement(ProcessContext c) {
            String element = c.element();
            Mutation row = createRow(element);
            c.output(KV.of(element, Collections.singletonList(row)));
        }
    }

    @Test
    @Category(NeedsRunner.class)
    public void testMutations() {
        PCollection<KV<String, Iterable<Mutation>>> output = pipeline
                .apply(Create.of(INPUT_VALUE))
                .apply(ParDo.of(new TestDoFn()));

        Mutation row = createRow(INPUT_VALUE);
        PAssert.that(output).containsInAnyOrder(KV.of(INPUT_VALUE, Collections.singletonList(row)));

        pipeline.run();
    }

    static Mutation createRow(String rowKey) {
        Put row = new Put(Bytes.toBytes(rowKey));
        String[] parts = rowKey.split(":");
        row.addColumn(COLUMN_FAMILY_BYTES, Bytes.toBytes(parts[0]), 1L, Bytes.toBytes(parts[1]));
        return row;
    }
}
Since this kept failing, I tried a couple of different approaches, such as creating a new DoFn that receives the expected values and asserts against the pipeline values, and using PAssert.that(output).satisfies(expectedValues) with a custom function (a rough sketch of the satisfies variant is below), but both approaches fail with:
Caused by: java.io.NotSerializableException: org.apache.hadoop.hbase.client.Put
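For reference, here is a minimal sketch of the kind of satisfies-based check I would like to end up with (not the exact code that produced the exception above). It assumes static imports of org.junit.Assert.assertEquals and assertArrayEquals, plus imports of java.util.List, org.apache.hadoop.hbase.Cell and org.apache.hadoop.hbase.CellUtil; output, pipeline, createRow, INPUT_VALUE and COLUMN_FAMILY_BYTES are the ones from the test above. The expected Put is built inside the SerializableFunction so that no non-serializable Put instance is captured in the closure, and cells are compared by value because Mutation does not implement a value-based equals():

PAssert.that(output).satisfies(input -> {
    // Build the expected Put inside the function body, not outside of it,
    // so the lambda's closure stays serializable.
    Put expected = (Put) createRow(INPUT_VALUE);
    for (KV<String, Iterable<Mutation>> kv : input) {
        assertEquals(INPUT_VALUE, kv.getKey());
        for (Mutation actual : kv.getValue()) {
            // Compare row keys and cell contents byte-for-byte instead of
            // relying on Mutation object equality.
            assertArrayEquals(expected.getRow(), actual.getRow());
            List<Cell> expectedCells = expected.getFamilyCellMap().get(COLUMN_FAMILY_BYTES);
            List<Cell> actualCells = actual.getFamilyCellMap().get(COLUMN_FAMILY_BYTES);
            assertEquals(expectedCells.size(), actualCells.size());
            for (int i = 0; i < expectedCells.size(); i++) {
                assertArrayEquals(CellUtil.cloneQualifier(expectedCells.get(i)),
                        CellUtil.cloneQualifier(actualCells.get(i)));
                assertArrayEquals(CellUtil.cloneValue(expectedCells.get(i)),
                        CellUtil.cloneValue(actualCells.get(i)));
            }
        }
    }
    return null;
});
pipeline.run();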
This feels like a very simple use case, so I must be missing something or doing something wrong. Any hints on what I am doing wrong?