Reputation: 377
I have a JSON object that's written in a weird way.
> {"custom": [ { "name": "addressIdNum", "valueNum": 12345678}, {
> "name": "cancelledDateAt", "valueAt": "2017-02-30T01:43:04.000Z" }] }
I'm not sure how to parse something like this. The keys are addressIdNum and cancelledDateAt, and the values are 12345678 and 2017-02-30T01:43:04.000Z, respectively.
How do I parse this using Snowflake SQL?
Thanks for all your help!
Best, Preet Rajdeo
Upvotes: 4
Views: 3091
Reputation: 169
This approach handles the array elements regardless of their order, without using UDFs. For documentation on FILTER, see https://docs.snowflake.com/en/sql-reference/functions/filter
with input as (
    select parse_json(
        '{"custom": [ { "name": "addressIdNum", "valueNum": 12345678}, {"name": "cancelledDateAt", "valueAt": "2024-04-05 01:02:03" }] }')
        as json
)
select
    json:custom as value,
    -- Pick the element whose "name" is addressIdNum, wherever it sits in the array.
    filter(
        value,
        a -> a:name::string = 'addressIdNum'
    )[0]:valueNum::integer as address_id_num,
    -- Same idea for cancelledDateAt; go through a string before casting to timestamp.
    cast(filter(
        value,
        a -> a:name::string = 'cancelledDateAt'
    )[0]:valueAt as string)::timestamp as cancelled_date_at
from input;
Upvotes: 0
Reputation: 4729
If your input is ALWAYS in this form (an array of two elements, each with the same fields in the same positions), you can combine the PARSE_JSON function with path access.
Just try this:
with input as (
    select parse_json(
        '{"custom": [ { "name": "addressIdNum", "valueNum": 12345678}, {"name": "cancelledDateAt", "valueAt": "2017-02-30T01:43:04.000Z" }] }')
        as json
)
select
    json:custom[0].valueNum::integer,
    json:custom[1].valueAt::timestamp
from input;
----------------------------------+-----------------------------------+
 JSON:CUSTOM[0].VALUENUM::INTEGER | JSON:CUSTOM[1].VALUEAT::TIMESTAMP |
----------------------------------+-----------------------------------+
 12345678                         | 2017-03-01 17:43:04               |
----------------------------------+-----------------------------------+
However, if the structure of your data might differ (e.g. the array elements might appear in a different order), it's probably best to write a JavaScript UDF in Snowflake to convert such messy data into something easier to query, along the lines of the sketch below.
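As a minimal sketch of that idea (the function name custom_to_object and its single-argument signature are assumptions for illustration, not part of the original answer), such a UDF could collapse the name/value pairs into one plain object, independent of element order:

create or replace function custom_to_object(custom variant)
returns variant
language javascript
as
$$
// Snowflake exposes JavaScript UDF arguments as uppercase variables (CUSTOM).
// Collapse the array of {name, valueNum/valueAt} pairs into {name: value, ...},
// regardless of the order of the array elements.
var result = {};
for (var i = 0; i < CUSTOM.length; i++) {
    var item = CUSTOM[i];
    // Keep whichever value* field this element carries.
    result[item.name] = (item.valueNum !== undefined) ? item.valueNum : item.valueAt;
}
return result;
$$;

with input as (
    select parse_json(
        '{"custom": [ { "name": "addressIdNum", "valueNum": 12345678}, {"name": "cancelledDateAt", "valueAt": "2017-02-30T01:43:04.000Z" }] }')
        as json
)
select
    custom_to_object(json:custom):addressIdNum::integer as address_id_num,
    custom_to_object(json:custom):cancelledDateAt::timestamp as cancelled_date_at
from input;

Once the pairs are flattened into a single object, ordinary path access works no matter how the source array was ordered.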
Upvotes: 3