Reputation: 339
I have created a .ktr file which reads data from one DB and loads it into another DB. The environment is Linux. If I run the .ktr file manually, it works properly, reading the values for both the source and destination DB from the kettle.properties file. But when I schedule the .sh file in cron so that it runs automatically to load the required records, it fails. I am assuming that, when scheduled in cron, it is not able to pick up the values from the kettle.properties file. Do I need to restart anything on the Linux machine?
I get the below error:
Unable to parse URL jdbc:postgresql://${DEST_DB_IP}:${DEST_DB_PORT}/${DEST_DB_NAME}
But why does this error occur? The .sh file works properly when I run it manually; it only fails when scheduled in cron.
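For reference, a minimal sketch of this kind of setup; all paths and the schedule are placeholders, not the actual ones:

    #!/bin/sh
    # run_load.sh - placeholder wrapper that runs the transformation with Pan
    cd /opt/pentaho/data-integration          # PDI installation directory
    ./pan.sh -file=/opt/etl/load_records.ktr -level=Basic

    # crontab entry: run every night at 02:15 and capture output in a log
    15 2 * * * /opt/etl/run_load.sh >> /tmp/etl_load.log 2>&1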
Upvotes: 0
Views: 225
Reputation: 1196
Variables must be defined outside of the Transformation that wants to access them.
Usually, a Transformation is used as a building block in a Job. That way, we can run a Set Variables job entry somewhere before the Transformation.
An alternative would be to use named parameters. Once they are defined in the transformation settings via Spoon, their values can be provided on the Pan command line. We can access parameters exactly like variables.
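For example, assuming the transformation defines the named parameters used in the connection URL, a cron-friendly Pan call could look like this (file path, host, port, and database name are made up):

    ./pan.sh -file=/opt/etl/load_records.ktr \
        -param:DEST_DB_IP=192.168.1.20 \
        -param:DEST_DB_PORT=5432 \
        -param:DEST_DB_NAME=targetdb \
        -level=Basic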
From an architectural viewpoint, I recommend always using a Job.
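Such a Job (a Set Variables entry followed by the Transformation) can then be run from cron with Kitchen, for example (the .kjb path is a placeholder):

    ./kitchen.sh -file=/opt/etl/load_records.kjb -level=Basic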
PS: I can't recommend adding global variables to kettle.properties, because that file will be overwritten when a new version is installed. Also, the default location is ~/.kettle, which most likely isn't the same folder for cron and an interactive user. Multiple users of a Kettle installation can share the same folder when the environment variable KETTLE_HOME is configured, but I still can't recommend it.
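If kettle.properties is used anyway, the cron environment must point at the same .kettle folder as the interactive user. A minimal sketch, assuming a shared folder /opt/etl/.kettle holds kettle.properties:

    # in the wrapper script (or the crontab line) before calling pan.sh / kitchen.sh:
    export KETTLE_HOME=/opt/etl   # PDI then reads /opt/etl/.kettle/kettle.properties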
Upvotes: 0