Reputation: 1486
I am trying to create a stream with CREATE STREAM AS SELECT (CSAS). The stream is created successfully, but when I push messages I get the exception below.
Caused by: org.apache.kafka.connect.errors.DataException: Struct schemas do not match.
at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:247)
at org.apache.kafka.connect.data.Struct.put(Struct.java:216)
at io.confluent.ksql.serde.GenericRowSerDe$GenericRowSerializer.serialize(GenericRowSerDe.java:116)
at io.confluent.ksql.serde.GenericRowSerDe$GenericRowSerializer.serialize(GenericRowSerDe.java:93)
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:62)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:162)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:102)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
Below are the main stream, the persistent stream, and the UDF details from the ksql-cli. I am not sure why the schemas are incompatible: as you can see, the processed stream
has a field called article
whose schema is exactly the same as the value returned by the UDF. Am I missing something here?
ksql> create stream main_stream ( article struct< _id VARCHAR, title VARCHAR, text VARCHAR, action VARCHAR, url VARCHAR, feed_id VARCHAR, mode VARCHAR, score INTEGER, published_at VARCHAR, retrieved_at VARCHAR> ) with (KAFKA_TOPIC='articles', value_format='JSON');
ksql> create stream processed as select test(article) article from main_stream;
ksql> describe processed;
Name : processed
Field | Type
-------------------------------------------------------------------------------------------------------------------------------------------------------------
ROWTIME | BIGINT (system)
ROWKEY | VARCHAR(STRING) (system)
ARTICLE | STRUCT<_ID VARCHAR(STRING), RAW_TITLE VARCHAR(STRING), RAW_TEXT VARCHAR(STRING), PROCESSED_TITLE VARCHAR(STRING), PROCESSED_TEXT VARCHAR(STRING)>
-------------------------------------------------------------------------------------------------------------------------------------------------------------
For runtime statistics and query details run: DESCRIBE EXTENDED <Stream,Table>;
ksql> show queries;
Query ID | Kafka Topic | Query String
--------------------------------------------------------------------------------------------------------------------------------------------------------------
CSAS_processed_20 | processed | CREATE STREAM processed WITH (REPLICAS = 1, PARTITIONS = 1, KAFKA_TOPIC = 'processed') AS SELECT TEST(MAIN_STREAM.ARTICLE) "ARTICLE"
FROM MAIN_STREAM MAIN_STREAM;
--------------------------------------------------------------------------------------------------------------------------------------------------------------
ksql> describe function test;
Name : TEST
Overview : test udf
Type : scalar
Jar : /Users/ktawfik/libs/custom-udf.jar
Variations :
Variation : TEST(article STRUCT<_ID VARCHAR, TITLE VARCHAR, TEXT VARCHAR, ACTION VARCHAR, URL VARCHAR, FEED_ID VARCHAR, MODE VARCHAR, SCORE INT, PUBLISHED_AT VARCHAR, RETRIEVED_AT VARCHAR>)
Returns : STRUCT<_ID VARCHAR, RAW_TITLE VARCHAR, RAW_TEXT VARCHAR, PROCESSED_TITLE VARCHAR, PROCESSED_TEXT VARCHAR>
Description : test
article : A complete article object
Also, below is the UDF code I used:
@Udf(description = "test",
schema = "struct< _id VARCHAR, raw_title VARCHAR, raw_text VARCHAR, processed_title VARCHAR, processed_text VARCHAR>")
public Struct processDocument(
@UdfParameter(
schema = "struct< _id VARCHAR, title VARCHAR, text VARCHAR, action VARCHAR, url VARCHAR, feed_id VARCHAR, mode VARCHAR, score INTEGER, published_at VARCHAR, retrieved_at VARCHAR>",
value = "article",
description = "A complete article object") Struct struct) {
Schema ARTICLE_SCHEMA = SchemaBuilder.struct()
.field("_id", Schema.STRING_SCHEMA)
.field("raw_title", Schema.STRING_SCHEMA)
.field("raw_text", Schema.STRING_SCHEMA)
.field("processed_title", Schema.STRING_SCHEMA)
.field("processed_text", Schema.STRING_SCHEMA)
.build();
Struct proStruct = new Struct(ARTICLE_SCHEMA);
proStruct.put("_id", "1234");
proStruct.put("raw_title", "RAW_TITLE___1234");
proStruct.put("raw_text", "RAW_TEXT___1234");
proStruct.put("processed_title", "TITLE____1234");
proStruct.put("processed_text", "TEXT____1234");
System.out.println(proStruct);
// Struct{_id=1234,raw_title=RAW_TITLE___1234,raw_text=RAW_TEXT___1234,processed_title=TITLE____1234,processed_text=TEXT____1234}
return proStruct;
}
Upvotes: 0
Views: 1046
Reputation: 1072
I was trying to solve the problem the same way but I'm struggling with the following case:
UDF:
@UdfDescription(name = "ValueUnpacker", description = "..")
public class ValueUnpacker {
private Schema valueSchema = SchemaBuilder.struct()
.field("LABEL", Schema.INT32_SCHEMA)
.build();
@Udf(description = "Test a string", schema = "struct<LABEL INT>")
public Struct unpackValue(@UdfParameter(value = "thingType", description = "a thing type") String thingType) {
Struct ret = new Struct(valueSchema);
int i = 5;
ret.put("LABEL", i);
System.out.println("Ret: " + ret);
return ret;
}
}
Running ksqlDB and typing:
ksql> SELECT valueunpacker('test') FROM SOME_STREAM EMIT CHANGES;
|{LABEL=5}
|{LABEL=5}
and it works fine. But creating a stream with
CREATE STREAM CONSTANT_STREAM AS SELECT valueunpacker('test') FROM SOME_STREAM;
fails with no output on the stream, and the ksqlDB log
shows the same problem: "Schemas do not match".
Upvotes: 1
Reputation: 1486
I was able to figure out the problem and solve it. Basically, the KSQL engine translates schema field names to UPPER case, so when I returned fields with lower-case names it was not able to match them, which is not clear in the docs.
The fix is that I have to use upper-case field names in the schema declared in the
@Udf
annotation (and in the Schema I build inside the function). The code finally looked like:
@Udf(description = "test",
schema = "struct< _ID VARCHAR, RAW_TITLE VARCHAR, RAW_TEXT VARCHAR, PROCESSED_TITLE VARCHAR, PROCESSED_TEXT VARCHAR>")
public Struct processDocument(
@UdfParameter(
schema = "struct< _id VARCHAR, title VARCHAR, text VARCHAR, action VARCHAR, url VARCHAR, feed_id VARCHAR, mode VARCHAR, score INTEGER, published_at VARCHAR, retrieved_at VARCHAR>",
value = "article",
description = "A complete article object") Struct struct) {
Schema ARTICLE_SCHEMA = SchemaBuilder.struct()
.field("_ID", Schema.STRING_SCHEMA)
.field("RAW_TITLE", Schema.STRING_SCHEMA)
.field("RAW_TEXT", Schema.STRING_SCHEMA)
.field("PROCESSED_TITLE", Schema.STRING_SCHEMA)
.field("PROCESSED_TEXT", Schema.STRING_SCHEMA)
.build();
Struct proStruct = new Struct(ARTICLE_SCHEMA);
proStruct.put("_ID", "1234");
proStruct.put("RAW_TITLE", "RAW_TITLE___1234");
proStruct.put("RAW_TEXT", "RAW_TEXT___1234");
proStruct.put("PROCESSED_TITLE", "TITLE____1234");
proStruct.put("PROCESSED_TEXT", "TEXT____1234");
System.out.println(proStruct);
return proStruct;
}
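For anyone wondering where the exception actually comes from: ConnectSchema.validateValue (the top frame of the stack trace) compares the returned struct's schema to the expected schema with equals(), and Connect field names are case-sensitive, so raw_title and RAW_TITLE are different fields. A minimal standalone sketch outside KSQL (class name and values are just illustrative) that reproduces the same DataException:
import org.apache.kafka.connect.data.ConnectSchema;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class SchemaCaseDemo {
    public static void main(String[] args) {
        // Connect field names are plain case-sensitive strings.
        Schema lowerCase = SchemaBuilder.struct()
                .field("raw_title", Schema.STRING_SCHEMA)
                .build();
        Schema upperCase = SchemaBuilder.struct()
                .field("RAW_TITLE", Schema.STRING_SCHEMA)
                .build();

        Struct value = new Struct(lowerCase).put("raw_title", "x");

        // validateValue compares the struct's schema to the expected schema
        // with equals(), so this throws
        // DataException: Struct schemas do not match.
        ConnectSchema.validateValue(upperCase, value);
    }
}
This is why the fix is simply to make the field names in the @Udf schema string, the SchemaBuilder fields, and the put() calls all upper case, matching what KSQL expects.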
Upvotes: 0