thisisshantzz

Reputation: 1097

TApplicationException exception when running a mapreduce job on an Accumulo Table

I am running a MapReduce job that reads data from one Accumulo table as input and writes the result to another Accumulo table. To do this, I am using the AccumuloInputFormat and AccumuloOutputFormat classes. Here is the code:

public int run(String[] args) throws Exception {

        Opts opts = new Opts();
        opts.parseArgs(PivotTable.class.getName(), args);

        Configuration conf = getConf();

        // Pass the formula through the job configuration so the tasks can read it.
        conf.set("formula", opts.formula);

        Job job = Job.getInstance(conf);

        job.setJobName("Pivot Table Generation");
        job.setJarByClass(PivotTable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        job.setMapperClass(PivotTableMapper.class);
        job.setCombinerClass(PivotTableCombiber.class);
        job.setReducerClass(PivotTableReducer.class);

        // Read the input from an Accumulo table.
        job.setInputFormatClass(AccumuloInputFormat.class);

        ClientConfiguration zkConfig = new ClientConfiguration()
                .withInstance(opts.getInstance().getInstanceName())
                .withZkHosts(opts.getInstance().getZooKeepers());

        AccumuloInputFormat.setInputTableName(job, opts.dataTable);
        AccumuloInputFormat.setZooKeeperInstance(job, zkConfig);
        AccumuloInputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));

        // Write the output to another Accumulo table, creating it if it does not exist.
        job.setOutputFormatClass(AccumuloOutputFormat.class);

        BatchWriterConfig bwConfig = new BatchWriterConfig();

        AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig);
        AccumuloOutputFormat.setZooKeeperInstance(job, zkConfig);
        AccumuloOutputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));
        AccumuloOutputFormat.setDefaultTableName(job, opts.pivotTable);
        AccumuloOutputFormat.setCreateTables(job, true);

        return job.waitForCompletion(true) ? 0 : 1;
    }

PivotTable is the name of the class that contains the main method (and this run method). I have written the mapper, combiner, and reducer classes as well. But when I try to execute this job, I get the following error:

Exception in thread "main" java.io.IOException: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:707)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:397)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
        at com.latize.ulysses.accumulo.postprocess.PivotTable.run(PivotTable.java:247)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at com.latize.ulysses.accumulo.postprocess.PivotTable.main(PivotTable.java:251)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:87)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.hasTablePermission(SecurityOperationsImpl.java:220)
        at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:692)
        ... 21 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
        at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_hasTablePermission(ClientService.java:641)
        at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.hasTablePermission(ClientService.java:624)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:223)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:220)
        at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:79)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:73)

Can someone tell me what I am doing wrong here? Any help would be appreciated.

EDIT: I am running Accumulo 1.7.0.

Upvotes: 1

Views: 754

Answers (1)

Christopher

Reputation: 2512

A TApplicationException indicates that the error occurred on the Accumulo tablet server, rather than in your client (MapReduce) code. Whenever you see a TApplicationException, you'll need to examine the tablet server logs to get more information about the underlying error.

Table permissions are usually retrieved from ZooKeeper, so it may indicate a problem with the tserver connecting to ZooKeeper.

Unfortunately, I don't see the hostname or the IP in the stack trace, so you may have to check all the tserver logs to find it.
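One way to narrow it down is to take MapReduce out of the picture and issue the same permission check from a small standalone client. The sketch below is only a rough example, assuming the Accumulo 1.7 client API, with placeholder values for the instance name, ZooKeeper hosts, principal, password, and table name; substitute whatever your job is configured with. If it fails with the same TApplicationException, the problem is on the server side and the matching stack trace should show up in one of the tserver logs.

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Instance;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
    import org.apache.accumulo.core.security.TablePermission;

    public class CheckTablePermission {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- use the same values the MapReduce job uses.
            Instance instance = new ZooKeeperInstance("myInstance", "zkhost1:2181,zkhost2:2181");
            Connector conn = instance.getConnector("myUser", new PasswordToken("myPassword"));

            // This is the same check InputConfigurator.validatePermissions() performs during
            // job submission (see the stack trace above). If it throws the same
            // TApplicationException here, the failure is server-side and the tablet server
            // logs will carry the details.
            boolean canRead = conn.securityOperations()
                    .hasTablePermission("myUser", "dataTable", TablePermission.READ);
            System.out.println("READ permission on dataTable: " + canRead);
        }
    }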

Upvotes: 1
