Reputation: 1437
I upgraded CDH from 5.0.0 beta to 5.2.1 using Cloudera Manager. All services are running properly except the Hive metastore. I'm using MySQL for the Hive metastore.
I'm not able to drop a table from Hive. When I run a query like:
`drop table test;`
I get the following error:
hive> drop table test;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDOException: Exception thrown when executing query
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
at org.apache.hadoop.hive.metastore.ObjectStore.deleteTableColumnStatistics(ObjectStore.java:6030)
at org.apache.hadoop.hive.metastore.ObjectStore.dropTable(ObjectStore.java:813)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
at com.sun.proxy.$Proxy0.dropTable(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1386)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
at com.sun.proxy.$Proxy5.drop_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'AO.BIG_DECIMAL_HIGH_VALUE' in 'field list'
Reference link
Can anyone suggest something?
Upvotes: 0
Views: 4298
Reputation: 26
I had exactly the same issue. I tried what was suggested in the reference link, but the problem persisted (even after I fixed all the errors I could see when running the scripts).
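For reference, applying one of those metastore upgrade scripts from the MySQL client looks roughly like this (the script path and database name below are assumptions based on a typical CDH package install, so adjust them to your setup):

mysql> USE metastore;
mysql> SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql;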
Finally, I searched for that `BIG_DECIMAL_HIGH_VALUE` column in the Hive metastore schema scripts and found it in every hive-schema-0.1*.0.mysql.sql file, in the following CREATE statements:
CREATE TABLE IF NOT EXISTS `TAB_COL_STATS` (
`CS_ID` bigint(20) NOT NULL,
`DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TBL_ID` bigint(20) NOT NULL,
`LONG_LOW_VALUE` bigint(20),
`LONG_HIGH_VALUE` bigint(20),
`DOUBLE_HIGH_VALUE` double(53,4),
`DOUBLE_LOW_VALUE` double(53,4),
`BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`NUM_NULLS` bigint(20) NOT NULL,
`NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4),
`MAX_COL_LEN` bigint(20),
`NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20),
`LAST_ANALYZED` bigint(20) NOT NULL,
PRIMARY KEY (`CS_ID`),
CONSTRAINT `TAB_COL_STATS_FK` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE IF NOT EXISTS `PART_COL_STATS` (
`CS_ID` bigint(20) NOT NULL,
`DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`PART_ID` bigint(20) NOT NULL,
`LONG_LOW_VALUE` bigint(20),
`LONG_HIGH_VALUE` bigint(20),
`DOUBLE_HIGH_VALUE` double(53,4),
`DOUBLE_LOW_VALUE` double(53,4),
`BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`NUM_NULLS` bigint(20) NOT NULL,
`NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4),
`MAX_COL_LEN` bigint(20),
`NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20),
`LAST_ANALYZED` bigint(20) NOT NULL,
PRIMARY KEY (`CS_ID`),
CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
When I checked those tables in the Hive metastore database, I got something different:
mysql> desc TAB_COL_STATS;
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| CS_ID | bigint(20) | NO | PRI | NULL | |
| AVG_COL_LEN | double | YES | | NULL | |
| COLUMN_NAME | varchar(128) | NO | | NULL | |
| COLUMN_TYPE | varchar(128) | NO | | NULL | |
| DB_NAME | varchar(128) | NO | | NULL | |
| DOUBLE_HIGH_VALUE | double | YES | | NULL | |
| DOUBLE_LOW_VALUE | double | YES | | NULL | |
| LAST_ANALYZED | bigint(20) | NO | | NULL | |
| LONG_HIGH_VALUE | bigint(20) | YES | | NULL | |
| LONG_LOW_VALUE | bigint(20) | YES | | NULL | |
| MAX_COL_LEN | bigint(20) | YES | | NULL | |
| NUM_DISTINCTS | bigint(20) | YES | | NULL | |
| NUM_FALSES | bigint(20) | YES | | NULL | |
| NUM_NULLS | bigint(20) | NO | | NULL | |
| NUM_TRUES | bigint(20) | YES | | NULL | |
| TBL_ID | bigint(20) | YES | MUL | NULL | |
| TABLE_NAME | varchar(128) | NO | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
17 rows in set (0.00 sec)
mysql> desc PART_COL_STATS;
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| CS_ID | bigint(20) | NO | PRI | NULL | |
| AVG_COL_LEN | double | YES | | NULL | |
| COLUMN_NAME | varchar(128) | NO | | NULL | |
| COLUMN_TYPE | varchar(128) | NO | | NULL | |
| DB_NAME | varchar(128) | NO | | NULL | |
| DOUBLE_HIGH_VALUE | double | YES | | NULL | |
| DOUBLE_LOW_VALUE | double | YES | | NULL | |
| LAST_ANALYZED | bigint(20) | NO | | NULL | |
| LONG_HIGH_VALUE | bigint(20) | YES | | NULL | |
| LONG_LOW_VALUE | bigint(20) | YES | | NULL | |
| MAX_COL_LEN | bigint(20) | YES | | NULL | |
| NUM_DISTINCTS | bigint(20) | YES | | NULL | |
| NUM_FALSES | bigint(20) | YES | | NULL | |
| NUM_NULLS | bigint(20) | NO | | NULL | |
| NUM_TRUES | bigint(20) | YES | | NULL | |
| PART_ID | bigint(20) | YES | MUL | NULL | |
| PARTITION_NAME | varchar(767) | NO | | NULL | |
| TABLE_NAME | varchar(128) | NO | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
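You can also confirm directly that the columns are missing with an information_schema query; a minimal sketch, assuming the metastore database is named metastore (substitute your own database name):

-- List any BIG_DECIMAL_* columns the live metastore actually has;
-- on a broken schema like the one above this returns no rows.
SELECT TABLE_NAME, COLUMN_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'metastore'
  AND TABLE_NAME IN ('TAB_COL_STATS', 'PART_COL_STATS')
  AND COLUMN_NAME LIKE 'BIG_DECIMAL%';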
Since both tables were empty, I just dropped and recreated them using the CREATE statements listed in the hive-schema-0.13.0.mysql.sql file, as sketched below.
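The drop-and-recreate step looks roughly like this (a sketch only; it is safe here precisely because both tables hold no rows, and you should back up the metastore database first):

-- Safe only because both stats tables are empty; back up the
-- metastore database (e.g. with mysqldump) first if in doubt.
DROP TABLE IF EXISTS PART_COL_STATS;
DROP TABLE IF EXISTS TAB_COL_STATS;
-- Then re-run the two CREATE TABLE statements shown above, copied from
-- hive-schema-0.13.0.mysql.sql, to restore the expected layout.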
After doing so I could drop the Hive tables again.
Upvotes: 1