Reputation: 792
I have a table called relevant_tables that resembles:
schema_name  table_name
AAA          AA
AAA          AB
AAA          AC
BBB          BA
I came across the information_schema.columns table, which lets me get the list of columns for a table if I specify WHERE table_name = 'my_table' AND table_schema = 'my_schema'.
I want to get all the columns for all of the tables in my relevant_tables table, as in the intended output below:
schema_name  table_name  column_name
AAA          AA          A
AAA          AA          B
AAA          AA          C
AAA          AB          A
AAA          AC          A
BBB          BA          A
So I tried to join on schema_name and table_name as follows:
SELECT
    c.table_schema,
    c.table_name,
    c.column_name
FROM information_schema.columns c
JOIN relevant_tables r
    ON c.table_schema = r.schema_name
    AND c.table_name = r.table_name;
However, I'm getting the following error:
[0A000] ERROR: Specified types or functions (one per INFO message) not supported on Redshift tables.
Why am I getting this error and how do I achieve my intended output?
Upvotes: 0
Views: 1668
Reputation: 11032
This error occurs when leader-node-only data is needed on the compute nodes (here, for your join); there is no path for a query to move leader-only data down to the compute nodes. There may be a compute-node equivalent table, or you can read the info through a cursor - see How to join System tables or Information Schema tables with User defined tables in Redshift
Update / more detail:
Let me try to add some clarity around how you might work around this issue. There are three ways I can see:
1. Find a system table with schema, table, and column names that IS available to the compute nodes.
2. Copy the system table data into a normal table and join against that.
3. Rethink the longer-term design so that joining system tables to user tables isn't needed.
I recommend you think about #3 (the long-term picture), but you likely just need to get this working asap. As for #1: if someone knows of such a system table, a lot of people would be interested, but I wouldn't spend much time driving down that path because many have before you. That leaves #2 - create a normal table with the data you need. If you understand cursors (at least a little), the stored procedure route is nice: you just call the procedure and the normal-table version of the system data is recreated (though not quickly). If that is too much, a little SQL that UNLOADs the system table info to S3 and then loads it back in with COPY will work just fine - you just need a space in S3 where you can dump the file(s), as sketched below.
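For the stored procedure route, here is a minimal sketch, assuming a target table named all_columns and a procedure named refresh_all_columns (both hypothetical names). The cursor loop runs on the leader node and each row is inserted individually into a normal table - which is exactly why the rebuild isn't fast:

-- Normal (compute-node) table that will hold a copy of the system data.
CREATE TABLE IF NOT EXISTS all_columns (
    table_schema VARCHAR(128),
    table_name   VARCHAR(128),
    column_name  VARCHAR(128)
);

-- Rebuild the copy by cursoring over the leader-node-only view and
-- inserting the rows one at a time into the normal table.
CREATE OR REPLACE PROCEDURE refresh_all_columns()
AS $$
DECLARE
    rec RECORD;
BEGIN
    -- DELETE rather than TRUNCATE so the procedure doesn't force a commit.
    DELETE FROM all_columns;
    FOR rec IN SELECT table_schema, table_name, column_name
               FROM information_schema.columns
    LOOP
        INSERT INTO all_columns
        VALUES (rec.table_schema, rec.table_name, rec.column_name);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

Once the copy exists, your original join works unchanged against it:

CALL refresh_all_columns();

SELECT
    c.table_schema,
    c.table_name,
    c.column_name
FROM all_columns c
JOIN relevant_tables r
    ON c.table_schema = r.schema_name
    AND c.table_name = r.table_name;

The UNLOAD/COPY route is the same idea without the procedure; the bucket and IAM role below are placeholders you would replace with your own:

-- Dump the system table info to S3 ...
UNLOAD ('SELECT table_schema, table_name, column_name
         FROM information_schema.columns')
TO 's3://my-bucket/all_columns_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
ALLOWOVERWRITE;

-- ... and load it back into the normal table.
COPY all_columns
FROM 's3://my-bucket/all_columns_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role';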
Hope this helps
Upvotes: 1