There is a problem processing audits for HIVESERVER2.
Getting this alert in Cloudera Manager?
There is a problem processing audits for HIVESERVER2.
[02/Sep/2019 12:36:30 +0000] 32165 Audit-Plugin throttling_logger ERROR (341 skipped) Error occurred when sending entry to server:
Digging further, we see this error in the agent log as well:
[02/Sep/2019 11:31:55 +0000] 4044 Profile-Plugin navigator_plugin INFO Pipelines updated for Profile Plugin: set([])
[02/Sep/2019 11:31:55 +0000] 4044 Audit-Plugin navigator_plugin_pipeline INFO Starting with navigator log None for role HIVESERVER2 and pipeline HiveSentryOnFailureHookTP
[02/Sep/2019 11:31:55 +0000] 4044 Metadata-Plugin navigator_plugin ERROR Exception caught when trying to refresh Metadata Plugin for conf.cloudera.spark_on_yarn with count 0 pipelines names [].
Traceback (most recent call last):
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/audit/navigator_plugin.py", line 198, in immediate_refresh
self._recreate_pipelines_for_csd()
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/audit/navigator_plugin.py", line 157, in _recreate_pipelines_for_csd
existing_logs = [name for name in os.listdir(self.nav_conf.log_dir)
AttributeError: 'NoneType' object has no attribute 'log_dir'
[02/Sep/2019 11:31:55 +0000] 4044 Metadata-Plugin navigator_plugin INFO Pipelines updated for Metadata Plugin: []
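The AttributeError tells us the plugin's Navigator configuration (nav_conf) was never populated, so the attribute access on line 157 blows up. Here is a minimal sketch, not the actual cm-agent code, showing how a None nav_conf produces exactly this failure:

import os

class NavigatorPlugin(object):
    # Minimal sketch (NOT the real cm-agent class) of the failure mode:
    # nav_conf stays None when the plugin's configuration never loads,
    # e.g. after a corrupted client config deployment.
    def __init__(self, nav_conf=None):
        self.nav_conf = nav_conf

    def _recreate_pipelines_for_csd(self):
        # With nav_conf == None, the attribute access below raises
        # AttributeError: 'NoneType' object has no attribute 'log_dir'
        return [name for name in os.listdir(self.nav_conf.log_dir)]

NavigatorPlugin()._recreate_pipelines_for_csd()  # reproduces the AttributeError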
Redeploying the Spark client configuration should solve this. In Cloudera Manager this runs:
Execute DeployClusterClientConfig for {yarn,solr,hbase,kafka,hdfs,hive,spark_on_yarn} in parallel.
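If you prefer to trigger the redeployment outside the UI, the sketch below uses the Cloudera Manager REST API's deployClientConfig command. The host, port, API version, cluster name and credentials are placeholders for illustration, not values from this environment.

# requests is a third-party library: pip install requests
import requests

CM_URL = "https://cm-host.example.com:7183"   # assumed CM host and TLS port
CLUSTER = "cluster1"                          # assumed cluster name (URL-encode it if it contains spaces)
AUTH = ("admin", "admin")                     # assumed credentials

# Asks CM to redeploy client configurations for every service in the cluster,
# equivalent to the UI action above.
resp = requests.post(
    CM_URL + "/api/v19/clusters/" + CLUSTER + "/commands/deployClientConfig",
    auth=AUTH,
    verify=False,  # only if CM uses a self-signed certificate
)
resp.raise_for_status()
print(resp.json())  # returned command object; poll /api/v19/commands/<id> for completion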
We can only surmise what happened in this case. It appears configuration changes were being made while an earlier client configuration deployment was still running, leaving the deployed configuration corrupted. Redeploying may not fix it in every case, however. YMMV.
If redeploying does not help, in all likelihood your free (trial) license has expired. In that case, navigate to the Cloudera Management Service configuration and turn off / uncheck the following:
Navigator Audit Server Role Health Test
In our case that wasn't enough either. Finally, remove the Navigator Audit Server role from the Cloudera Management Service instances, since no valid license exists for it anymore.
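If you would rather script the removal than click through the UI, here is a hedged sketch against the CM API. The connection details are placeholders, and "NAVIGATOR" is my assumption for the Audit Server's role type string; confirm it in the GET output before deleting anything.

import requests

CM_URL = "https://cm-host.example.com:7183"   # assumed CM host and TLS port
AUTH = ("admin", "admin")                     # assumed credentials

# List the Cloudera Management Service roles and delete the Navigator Audit
# Server role. Verify the type string in the listing first.
roles = requests.get(CM_URL + "/api/v19/cm/service/roles",
                     auth=AUTH, verify=False).json()
for role in roles.get("items", []):
    if role.get("type") == "NAVIGATOR":  # assumed type for Navigator Audit Server
        resp = requests.delete(CM_URL + "/api/v19/cm/service/roles/" + role["name"],
                               auth=AUTH, verify=False)
        resp.raise_for_status()
        print("Deleted role", role["name"])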
Cheers,
TK