I installed WSO2 AM 1.9.1 and WSO2 BAM 2.5.0 on the same Linux machine, and configured them as described in https://docs.wso2.com/display/AM190/Publishing+API+Runtime+Statistics. When I start WSO2 BAM, the am_stats_analyzer script runs repeatedly and no error is reported, but the WSO2 AM side still shows that statistics are not configured.
The Java version is Oracle JDK 1.7.0_80, running as root. The log below is printed again and again. Please help!
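For reference, statistics publishing on the API Manager side is controlled by the APIUsageTracking section of repository/conf/api-manager.xml. The fragment below is a sketch based on that docs page, assuming BAM runs on the same machine with a port offset of 3 (so its Thrift receiver listens on 7611 + 3 = 7614); adjust the URL, port, and credentials for your own setup:

```xml
<APIUsageTracking>
    <!-- Enables publishing usage events from AM to BAM -->
    <Enabled>true</Enabled>
    <PublisherClass>org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher</PublisherClass>
    <!-- Thrift port of the BAM data receiver: 7611 plus the Offset set in BAM's carbon.xml -->
    <ThriftPort>7614</ThriftPort>
    <BAMServerURL>tcp://localhost:7614/</BAMServerURL>
    <BAMUsername>admin</BAMUsername>
    <BAMPassword>admin</BAMPassword>
</APIUsageTracking>
```

If this section is missing or Enabled is false, the publisher portal reports that statistics are not configured even though the BAM-side Hive script runs fine.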
logs
[2015-12-21 02:22:00,005] INFO {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} - Running script executor task for script **am_stats_analyzer**.
[Mon Dec 21 02:22:00 CST 2015]Hive history file=/home/wso2bam-2.5.0/tmp/hive/root-querylogs/hive_job_log_root_201512210222_2145444007.txt
OK
OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
log4j:WARN No appenders could be found for logger (org.apache.axiom.util.stax.dialect.StAXDialectDetector).
log4j:WARN Please initialize the log4j system properly.
Execution log at: /home/wso2bam-2.5.0/repository/logs//wso2carbon.log
[2015-12-21 02:22:07,801] WARN {org.apache.hadoop.mapred.JobClient} - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2015-12-21 02:22:10,999 null map = 0%, reduce = 0%
2015-12-21 02:22:14,001 null map = 100%, reduce = 0%
2015-12-21 02:22:20,004 null map = 100%, reduce = 100%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
[... the same block repeats every two minutes with new timestamps ...]
There may be several reasons for the statistics not showing in the API Manager publisher portal. The stats tables are created by the Hive script am_stats_analyzer, which is scheduled to run every two minutes (cron expression 0 0/2 * * * ?), but data is only inserted into those tables after the API receives traffic. You therefore need to invoke the API first (for example with curl or a REST client such as Advanced REST Client). Once requests hit the API, the usage events are published to BAM and the stats values are inserted into the tables created under the TestStatsDB schema.
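As a concrete way to generate that traffic, you can call any published API through the gateway. The host, context, and token below are placeholders for your own deployment, not values from this question:

```shell
# Placeholders: replace with your gateway URL, your API's context/version,
# and a valid OAuth access token obtained from the API Store.
GATEWAY="https://localhost:8243"
API_PATH="/myapi/1.0.0/resource"
TOKEN="<access-token>"

# -k skips certificate validation (the default pack ships a self-signed cert)
curl -k -H "Authorization: Bearer ${TOKEN}" "${GATEWAY}${API_PATH}"
```

After a few such requests, wait for the next two-minute run of am_stats_analyzer and then reload the statistics page in the publisher portal.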