[gpfsug-discuss] CES and Directory list populating very slowly
Marc A Kaplan
makaplan at us.ibm.com
Tue May 9 19:58:22 BST 2017
If you haven't already, measure the time directly on the CES node command
line, skipping the Windows and Samba overheads:

  time ls -l /path

or

  time ls -lR /path

depending on which you're interested in.
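Running the listing twice back to back is also a cheap way to separate
cache effects from raw metadata I/O (a rough sketch; the mount point
/gpfs/fs1/share is a hypothetical example):

  # first run: the inode/stat cache on this node is likely cold
  time ls -l /gpfs/fs1/share > /dev/null
  # second run: mostly served from the GPFS metadata cache, so a
  # large gap between the two points at cache sizing on this node
  time ls -l /gpfs/fs1/share > /dev/null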
From: "Sven Oehme" <oehmes at us.ibm.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 05/09/2017 01:01 PM
Subject: Re: [gpfsug-discuss] CES and Directory list populating
very slowly
Sent by: gpfsug-discuss-bounces at spectrumscale.org
ESS nodes have cache, but what matters most for this type of workload is
a very large metadata cache, and that resides on the CES node for SMB/NFS
workloads. So if you know that your clients will use this 300k-file
directory a lot, you want a very large maxFilesToCache setting on those
nodes. An alternative is to install an LROC device and configure a larger
stat cache; this helps especially if you have multiple large directories
and want to cache as many entries as possible from all of them.
Make sure you have enough token manager capacity and memory if you have
multiple CES nodes and they all run with high settings.
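For reference, the tuning described above would look roughly like this
(a sketch only: the cache values are illustrative assumptions and should
be sized against the memory actually available on the protocol nodes, and
the LROC device name and server are hypothetical):

  # raise the inode/metadata cache on the protocol nodes only;
  # cesNodes is the built-in node class for CES nodes
  mmchconfig maxFilesToCache=1000000 -N cesNodes

  # a much larger stat cache is mainly useful together with LROC
  mmchconfig maxStatCache=4000000 -N cesNodes

  # LROC is defined as an NSD with usage=localCache, e.g. in a
  # stanza file passed to mmcrnsd:
  #   %nsd: device=/dev/sdx nsd=lroc_ces1 servers=ces1 usage=localCache

  # these cache settings typically take effect after GPFS is
  # restarted on the affected nodes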
sven
------------------------------------------
Sven Oehme
Scalable Storage Research
email: oehmes at us.ibm.com
Phone: +1 (408) 824-8904
IBM Almaden Research Lab
------------------------------------------
From: Mark Bush <Mark.Bush at siriuscom.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 05/09/2017 05:25 PM
Subject: [gpfsug-discuss] CES and Directory list populating very slowly
Sent by: gpfsug-discuss-bounces at spectrumscale.org
I have a customer who is struggling (they already have a PMR open and it's
being actively worked on now). I'm simply seeking to understand potential
places to look. They have an ESS with a few CES nodes in front. Clients
connect via SMB to the CES nodes. One fileset has about 300k smallish
files in it, and when the client opens a Windows file browser it takes
around 30 minutes to finish populating the file list in this SMB share.
Here's where my confusion is. When a client connects to a CES node, this
is all the job of CES and its protocol services to handle, in this case
CTDB/Samba.
But the flow here is where I'm a little fuzzy. Obviously the CES nodes
act as clients to the NSD servers (the IO nodes in ESS land). So the data
doesn't really live on the protocol node; it passes requests off to the
NSD servers for regular IO processing. Does the CES node do some type of
caching? I've heard talk of putting LROC on CES nodes, but I'm curious
whether all of this is already being stored in the pagepool.
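For checking what a CES node currently has configured and cached,
something along these lines should show the relevant pieces (a sketch;
the exact output fields vary by Spectrum Scale release):

  # cache-related configuration as currently set
  mmlsconfig pagepool
  mmlsconfig maxFilesToCache
  mmlsconfig maxStatCache

  # runtime memory use, including the file and stat cache pools
  mmdiag --memory

  # LROC hit/miss statistics, if an LROC device is configured
  mmdiag --lroc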
What could cause a mostly metadata-related, simple directory lookup to
take what seems to the customer a long time for a couple hundred thousand
files?
Mark
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss