[gpfsug-discuss] GPFS GUI
Markus Rohwedder
rohwedder at de.ibm.com
Wed May 17 17:00:12 BST 2017
Hello all,
if multiple collectors are to work together in a federation, the collector
peers need to be specified in the ZimonCollectors.cfg.
The GUI will see data from all collectors once federation is set up.
See the documentation in the Knowledge Center below (it applies to 4.2.2 and 4.2.3 alike):
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adv_federation.htm
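For illustration only: the peers are declared in the collector configuration file on each collector node (typically /opt/IBM/zimon/ZIMonCollector.cfg, with 9085 as the usual federation port; please verify both against the KC link above for your release). The stanza looks roughly like this, with placeholder hostnames:

    peers = {
        host = "collector1.example.com"
        port = "9085"
    },
    {
        host = "collector2.example.com"
        port = "9085"
    }

After editing the file, the pmcollector service on each collector node normally needs a restart (for example: systemctl restart pmcollector).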
For the issue related to the node count, could you contact me via private message?
Mit freundlichen Grüßen / Kind regards
Markus Rohwedder
IBM Spectrum Scale GUI Development
From: "David D. Johnson" <david_johnson at brown.edu>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 17.05.2017 13:59
Subject: Re: [gpfsug-discuss] GPFS GUI
Sent by: gpfsug-discuss-bounces at spectrumscale.org
I have issues as well with the GUI. The issue I had that was most similar to yours
came about because I had installed the collector RPM and enabled collectors on
two server nodes, but the GUI was only getting data from one of them. Each
client randomly selected a collector to deliver data to.
So how are multiple collectors supposed to work? Active/passive? Failover pairs?
Shared storage? Better not be on GPFS… Maybe there is a place in the GUI config
to tell it to keep track of multiple collectors, but I gave up looking, turned off
the second collector service, and removed it from the candidates.
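For reference, the collector candidates and redundancy the cluster is currently configured with can be listed with something like the following (field names quoted from memory and may differ by release):

    mmperfmon config show | grep -iE 'colCandidates|colRedundancy'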
The other issue I mentioned before is that it is totally confused about how many
nodes are in the cluster (it thinks 21, with 3 unhealthy) when there are only 12
nodes in all, all healthy. The nodes dashboard never finishes loading, and there
is no means of digging deeper (text-based info) to find out why it is wedged.
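For comparison, the actual cluster membership and node state can be cross-checked from the command line with something like this (mmhealth cluster show needs a recent 4.2.x release):

    mmlscluster            # nodes that are actually members of the cluster
    mmgetstate -a          # GPFS daemon state on every node
    mmhealth cluster show  # overall health summary per component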
— ddj
On May 17, 2017, at 7:44 AM, Wilson, Neil <neil.wilson at metoffice.gov.uk> wrote:
Hello all,
Does anyone have any experience with troubleshooting the new GPFS GUI?
I’ve got it up and running but have a few weird problems with it...
Maybe someone can help or point me in the right direction?
1. It keeps generating an alert saying that the cluster is down, when it isn’t?

   Event name:      gui_cluster_down
   Component:       GUI
   Entity type:     Node
   Entity name:
   Event time:      17/05/2017 12:19:29
   Message:         The GUI detected that the cluster is down.
   Description:     The GUI checks the cluster state.
   Cause:           The GUI calculated that an insufficient amount of quorum nodes is up and running.
   User action:     Check why the cluster lost quorum.
   Reporting node:
   Event type:      Active health state of an entity which is monitored by the system.
2. It is collecting sensor data from the NSD nodes without any issue, but it
   won’t collect sensor data from any of the client nodes.
   I have the pmsensors package installed on all the nodes in question, the
   service is enabled and running, and the logs show that it has connected to
   the collector (the checks I ran are listed below, after point 3).
   However, in the GUI it just says “Performance collector did not return any data”.
3. The NSD nodes are returning performance data, but they are all displaying
   a state of unknown.

It would be great if anyone has any experience or ideas on how to troubleshoot this!
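For reference, this is roughly how I have been checking the client nodes (paths are the defaults on our install and may differ; the log location in particular might vary):

    systemctl status pmsensors                              # sensor service is enabled and running
    grep -A 3 'collectors' /opt/IBM/zimon/ZIMonSensors.cfg  # which collector(s) this node reports to
    tail /var/log/zimon/ZIMonSensors.log                    # log lines showing the connection to the collector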
Thanks
Neil
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss