Gabber

Reputation: 7249

Hive: Getting an error when executing SELECT and DROP PARTITION queries at the same time

I am getting an error when running two queries at the same time.

Here is the scenario.

I am using AWS EMR, and below is my Hive table schema.

CREATE TABLE India (
  OFFICE_NAME       STRING,
  OFFICE_STATUS     STRING,
  PINCODE           INT,
  TELEPHONE         BIGINT,
  TALUK             STRING,
  DISTRICT          STRING,
  POSTAL_DIVISION   STRING,
  POSTAL_REGION     STRING,
  POSTAL_CIRCLE     STRING
)
PARTITIONED BY (STATE STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 's3a://mybucket/'
TBLPROPERTIES ('parquet.compression'='SNAPPY', 'transient_lastDdlTime'='1537781726');

First query:

SELECT count( distinct STATE ) FROM India;

Second query:

ALTER TABLE India DROP PARTITION (STATE='Delhi');

While the first query was running, I executed the second query at the same time, and the first query failed with this error:

Error: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:271)
  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:144)
  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:200)
  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:186)
  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:455)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:344)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
  at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257)
  ... 11 more
Caused by: com.amazon.ws.emr.hadoop.fs.consistency.exception.FileDeletedInMetadataNotFoundException: File 'mybucket/India/state=Delhi/000000_0' is marked as deleted in the metadata
  at com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:440)
  at com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:416)
  at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at com.sun.proxy.$Proxy34.getFileStatus(Unknown Source)
  at com.amazon.ws.emr.hadoop.fs.s3n2.S3NativeFileSystem2.getFileStatus(S3NativeFileSystem2.java:227)
  at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.getFileStatus(EmrFileSystem.java:509)
  at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:386)
  at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:372)
  at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
  at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:75)
  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:99)
  ... 15 more

After googling, I found this link:

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emrfs-files-tracked.html

Is there any way to sync the metadata at runtime, or to make the second query wait until the first one has completed?

Please help me fix this issue, or suggest any parameter I can set that will fix it.

Upvotes: 1

Views: 391

Answers (1)

leftjoin

Reputation: 38335

Partition paths and splits are calculated at the very beginning of the query. Your mappers had already started reading files in the partition location when you dropped the partition; because the table is managed, the drop also deleted the underlying files, which caused the FileDeletedInMetadataNotFoundException at runtime.

If you still want to drop a partition while it is being read, try this:

If you make your table EXTERNAL, then DROP PARTITION will not delete the files: they will remain, no exception will be thrown, and you can remove the partition location from the filesystem later. Alternatively, use an S3 lifecycle policy to expire old files, as described here.
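A minimal sketch of that approach, using the table and partition from the question (note that the `'EXTERNAL'='TRUE'` property value is case-sensitive in some Hive versions):

```sql
-- Convert the managed table to EXTERNAL so that dropping a partition
-- removes only the metastore entry, not the files in S3:
ALTER TABLE India SET TBLPROPERTIES ('EXTERNAL'='TRUE');

-- This now leaves the partition's files under the table location intact:
ALTER TABLE India DROP PARTITION (STATE='Delhi');
```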

Unfortunately, an already-started job cannot detect that a Hive partition and its files were dropped and gracefully skip reading them: by that point the Hive metadata has already been read, the query plan built, and the splits calculated.

So the solution is to drop the Hive partition first and postpone deleting the files.
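Since the stack trace shows EMRFS consistent view is enabled, the deferred cleanup could look roughly like this; the bucket and prefixes below are taken from the error message and are only illustrative:

```shell
# Later, once no running query still references the old partition,
# delete the data and bring the EMRFS consistent-view metadata back
# in sync with S3. (Adjust paths to your actual partition location.)
aws s3 rm --recursive "s3://mybucket/India/state=Delhi/"
emrfs sync "s3://mybucket/India/"
```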

BTW, adding partitions while querying the table works fine.
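For contrast, a sketch of the add case (the partition value and location here are made up for illustration):

```sql
-- Adding a partition only inserts a metastore entry; already-running
-- jobs keep their previously calculated splits, so nothing breaks:
ALTER TABLE India ADD PARTITION (STATE='Goa')
LOCATION 's3a://mybucket/India/state=Goa/';
```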

Upvotes: 2
