How to fill Cygnus.conf


A few days ago I managed to run Cygnus on my Context Broker VM, as the documentation describes. All subscriptions between Cygnus and the Context Broker are created without problems, and the notifications that the CB sends reach Cygnus.

My doubt is about how to configure cygnus.conf: I think the failures I get when Cygnus sends data to Cosmos are related to how this file's fields are configured. The following is the template to fill in, available in the documentation:

### ============================================
### OrionHDFSSink configuration
### channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
### sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
### Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
cygnusagent.sinks.hdfs-sink.cosmos_host = x1.y1.z1.w1,x2.y2.z2.w2
### port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for infinity
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
### default username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.cosmos_default_username = default
### default password for the default username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = xxxxxxxxxxxxx
### HDFS backend type (webhdfs, httpfs or infinity)
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
### how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = column
### Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = x.y.z.w
### Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000

### ============================================

But it is not clear to me what address I have to put in the following field:

### Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
cygnusagent.sinks.hdfs-sink.cosmos_host = x1.y1.z1.w1,x2.y2.z2.w2

and I also don't know whether, for the Hive server field, the address I need to write is the same as the FIWARE Cosmos instance's IP address:

### Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = x.y.z.w

There is 1 answer, from fgalan:

Looking at the BigData Quick Start documentation, it seems that the value for cosmos_host, in the case of using the FIWARE Lab Cosmos instance, is cosmos.lab.fi-ware.org.

Regarding Hive, the documentation says:

Or remotely, by developing a Hive client (typically using JDBC, but there are some other options for non-Java programming languages) connecting to cosmos.lab.fi-ware.org:10000.

so I guess that the hive_host is the same (cosmos.lab.fi-ware.org).
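Putting the two values together, the relevant lines of cygnus.conf for the FIWARE Lab global Cosmos instance would look roughly like this (a sketch: your_cosmos_username and your_cosmos_password are placeholders for your own Cosmos account credentials, and the ports are the defaults from the template above):

### Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org
### port of the Cosmos service listening for persistence operations; 14000 for httpfs
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
### your Cosmos account credentials (placeholders)
cygnusagent.sinks.hdfs-sink.cosmos_default_username = your_cosmos_username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = your_cosmos_password
### HDFS backend type matching port 14000
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
### Hive endpoint on the same Cosmos instance
cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fi-ware.org
cygnusagent.sinks.hdfs-sink.hive_port = 10000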

Finally, take into account the following:

In addition, all the documented connections to such a global instance (except for ssh connections and the Cosmos portal) must be done from a FI-LAB virtual machine; otherwise, the firewall will stop them.

which means that you should run Cygnus from a VM inside FIWARE Lab.
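For completeness, here is a minimal sketch of the remote Hive client mentioned in the quoted documentation. It assumes the classic HiveServer1 JDBC driver and URL scheme (a HiveServer2 deployment would use org.apache.hive.jdbc.HiveDriver and a jdbc:hive2:// URL instead), and that the same Cosmos account credentials used in cygnus.conf are accepted by the Hive endpoint; the username and password below are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CosmosHiveClient {
    public static void main(String[] args) throws Exception {
        // Assumption: HiveServer1 driver; HiveServer2 uses org.apache.hive.jdbc.HiveDriver
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");

        // Connect to the Hive endpoint quoted above, using your Cosmos credentials (placeholders)
        Connection con = DriverManager.getConnection(
                "jdbc:hive://cosmos.lab.fi-ware.org:10000/default",
                "your_cosmos_username", "your_cosmos_password");

        // List the tables Cygnus has provisioned, just to check the connection works
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("show tables");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }

        rs.close();
        stmt.close();
        con.close();
    }
}

The Hive JDBC client jar and its Hadoop dependencies have to be on the classpath, and, as the quote above notes, the connection has to be made from a VM inside FIWARE Lab.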