Reputation: 108
I have a doubt about how to persist data in an architecture where Cygnus is subscribed to Orion Context Broker and must then persist the data in Cosmos. Is it necessary to implement a custom WebHDFS client for persisting the data from Cygnus to Cosmos, or can it be stored automatically if we configure Cosmos via the CLI? After reading some documentation I still don't know whether this "last step" can be done through CLI configuration or whether a custom client is needed. When would a custom WebHDFS client not be necessary?
Upvotes: 3
Views: 194
Reputation: 3798
As said, Cygnus subscribes to Orion in order to receive notifications about certain desired entities whenever any of their attributes changes.
What happens then? Cygnus uses the WebHDFS REST API to write the data into Cosmos HDFS, typically one file per notified entity. If the file does not exist yet, the "create" operation of the REST API is used; if it already exists, the "append" operation is used.
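The create-or-append decision above can be sketched as follows. This is only an illustration, not Cygnus's actual code: the host, port and user name are hypothetical placeholders, and real WebHDFS interactions also involve authentication and redirect handling.

```python
# Sketch of how a WebHDFS writer chooses between "create" and "append".
# The base URL below is a hypothetical HttpFS/WebHDFS endpoint.
WEBHDFS_BASE = "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1"

def webhdfs_request(hdfs_path, user, file_exists):
    """Return the (HTTP method, URL) pair for persisting one notification.

    WebHDFS uses PUT with op=CREATE for new files and
    POST with op=APPEND for existing ones.
    """
    op = "APPEND" if file_exists else "CREATE"
    method = "POST" if file_exists else "PUT"
    return method, f"{WEBHDFS_BASE}{hdfs_path}?op={op}&user.name={user}"
```

For example, the first notification for an entity would yield a `PUT ...?op=CREATE` request, and every later one a `POST ...?op=APPEND` request against the same file.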
Where are the above files created? The Cygnus HDFS file path looks like this:
/user/<your_cosmos_username>/<notified_fiware_service>/<notified_fiware_servicePath>/<built_destination>/<built_destination>.txt
The notified_fiware_service and notified_fiware_servicePath are HTTP headers sent by Orion in the notification; they determine how the data is organized. The built_destination is usually the result of concatenating the notified entityId and entityType.
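Putting the pieces of the path template together, the construction can be sketched like this (the function name and the exact concatenation rule are illustrative assumptions; Cygnus's real destination-building logic may differ, e.g. in how it escapes characters):

```python
# Sketch of how the HDFS file path is assembled from a notification.
# Assumes built_destination = "<entityId>_<entityType>" and a servicePath
# given without a leading slash; both are simplifying assumptions.
def build_hdfs_path(cosmos_user, fiware_service, fiware_service_path,
                    entity_id, entity_type):
    destination = f"{entity_id}_{entity_type}"  # the built_destination
    return (f"/user/{cosmos_user}/{fiware_service}/"
            f"{fiware_service_path}/{destination}/{destination}.txt")

# Example: a "room1" entity of type "Room" notified under service "rooms"
# and servicePath "floor1" for Cosmos user "myuser":
# build_hdfs_path("myuser", "rooms", "floor1", "room1", "Room")
# -> /user/myuser/rooms/floor1/room1_Room/room1_Room.txt
```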
Finally, your_cosmos_username is your Linux and HDFS username in the FIWARE LAB Cosmos deployment. It is obtained by logging in with your FIWARE LAB credentials at http://cosmos.lab.fi-ware.org/cosmos-gui/. You only have to do this once; it is, let's say, a provisioning step that creates the Unix username and your HDFS userspace.
Upvotes: 1