Discussion:
[mapguide-users] MapGuide 2.5 poor performance on a multicore environment
Roberto Lombardini
2017-10-10 11:36:23 UTC
We are currently managing a GIS web application running on MapGuide Server 2.5,
installed on a Windows Server instance with 22 cores running at 2.6 GHz
and 24 GB of RAM.

We are currently serving a maximum of about 50-60 users. The data for the
various map layers are stored in a PostgreSQL database instance running on the
same server. From the same mgserver instance we are currently serving 3
different web applications (running on Tomcat 7) and about 10 different maps
with approximately 50 layers each.

We are experiencing severe performance issues. Even just retrieving a map
from the web browser can take anywhere from 8 to 40 seconds. Unfortunately
this is unacceptable, as users need to work with this system all day long,
and such delays severely reduce their productivity.

Monitoring the system resources, we noticed that neither RAM nor CPU usage
is maxed out. At moments of maximum stress, mgserver.exe consumes about 50%
of the CPU (out of a total of about 70% CPU usage) with about 254 threads
attached, but only a small number of those threads actually consume notable
CPU resources, on the order of 3-4% each.

We have tried changing some of the parameters in serverconfig.ini and
webclient.ini, in particular increasing the following values:
---------------------------
serverconfig.ini
---------------------------
*Admin connections:
MaxConnections = 40
ThreadPoolSize = 40

*Client connections:
MaxConnections = 120
ThreadPoolSize = 120

*Site connections:
MaxConnections = 80
ThreadPoolSize = 80

We have also tried, without success, halving these values, similar to the
1:3:2 ratio per core suggested on some web pages; however, the server's
response times remained slow, and CPU usage is still high but not maxed
out.
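For reference, here is how the 1:3:2 per-core ratio works out on this particular box. This is only a sketch of the arithmetic; the ratio itself is community advice rather than an official MapGuide recommendation:

```python
cores = 22  # physical cores on the server in question

# Community-suggested Admin:Client:Site thread-pool ratio per core.
ratio = {"Admin": 1, "Client": 3, "Site": 2}

# MaxConnections/ThreadPoolSize per connection type under that ratio.
pools = {name: factor * cores for name, factor in ratio.items()}
print(pools)  # {'Admin': 22, 'Client': 66, 'Site': 44}
```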

Furthermore, performance.log indicates that cache usage is 6 out of a
maximum of 100, so we have excluded that as a bottleneck. The number of
concurrent sessions is about 110 out of 200, so that does not seem to be
the cause of the slowness either.

It seems as if MapGuide is spawning threads that, for some unknown reason,
are not fully using the available CPU resources.

Do you have ideas? What can we do to debug the problem?

Thank you.



--
Sent from: http://osgeo-org.1560.x6.nabble.com/MapGuide-Users-f4182607.html
Jackie Ng
2017-10-11 15:28:37 UTC
The Site Administrator has a performance report feature. Do you see similar
performance numbers from this tool when profiling your map(s) in question?

- Jackie



Roberto Lombardini
2017-10-12 09:38:29 UTC
Right now the response times reported by the profiling tool are roughly in
line with the ones experienced by users, around 6-8 seconds (though these
figures sometimes spike to 30-40 seconds, and we haven't yet run any
profiling under those circumstances).

We just ran a few profiling passes and noticed that a couple of layers
pointing to quite large PostgreSQL tables (1.5 million records, with a
spatial index on the relevant geometry column) take several seconds to
render.

However, we also noticed that these times are very unstable. If we run the
profiler a few times consecutively, the same layer can take anywhere from
under 800 ms to more than 3000 ms to render.

What we find strange is that manually running SELECT queries against the
spatially indexed column of the same table yields very consistent execution
times, so we are puzzled about what may be causing the variability in the
performance reporting tool's results.
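One way to quantify that variability outside of MapGuide is to time repeated runs of the same workload and look at the spread. The sketch below uses only the Python standard library with a stand-in workload; in a real test the callable would execute the layer's spatial SELECT through a PostgreSQL driver such as psycopg2 (connection details omitted here):

```python
import statistics
import time

def time_repeated(run_query, repeats=10):
    """Time repeated executions of run_query; return (min, max, stddev) in ms."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_query()
        samples.append((time.perf_counter() - start) * 1000.0)
    return min(samples), max(samples), statistics.pstdev(samples)

# Stand-in workload; replace with a callable that runs the actual
# layer query against PostgreSQL to measure real query-time spread.
lo, hi, spread = time_repeated(lambda: sum(range(100_000)), repeats=5)
print(f"min={lo:.2f} ms  max={hi:.2f} ms  stddev={spread:.2f} ms")
```

A large standard deviation from the database side would point at the server (caching, contention); consistent database timings with unstable render times would point back at MapGuide or FDO.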





Roberto Lombardini
2017-10-12 10:06:53 UTC
Ok, right now it has started being slow again: 40 seconds to get a map.

However, the performance tool still shows figures of about 6-8 seconds, as
before, so there is a huge discrepancy between the tool and the user
experience. We are on multiple gigabit connections, so connection speed can
be ruled out.



Jackie Ng
2017-10-12 11:20:51 UTC
You can enable FDO trace logging for the PostgreSQL provider if you want to
see the SQL being executed by the provider:

https://themapguyde.blogspot.com.au/2013/01/mapguide-tidbits-fdo-rdbms-provider.html

I don't think this outputs the input geometry for spatial queries, which may
limit its usefulness, but it will still tell you the SQL the provider is
actually executing versus what you think it is executing.

- Jackie



Jackie Ng
2017-10-13 15:34:35 UTC
If FDO trace logging doesn't work for you, PostgreSQL has logging
facilities you can enable to see all SQL statements executed.

Perhaps something like this:
https://stackoverflow.com/questions/8208310/postgresql-how-to-see-which-queries-have-run
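For example, a minimal postgresql.conf sketch (these are standard PostgreSQL settings; the 500 ms threshold is just an illustration, and the server needs a configuration reload afterwards):

```ini
# Log any statement that runs longer than 500 ms (threshold illustrative)
log_min_duration_statement = 500

# Alternatively, log every statement (very verbose; enable only temporarily)
log_statement = 'all'
```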

The idea is that, if you can reproduce the bottlenecked rendering, you can
capture all the SQL statements that were executed, run those queries in
isolation, and, for any queries you verify as slow, use pgAdmin to EXPLAIN
them.

From the EXPLAINed execution plans you can then see whether the queries are
actually hitting your spatial and secondary indexes, or whether they are
doing something terrible like full table scans.
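As a sketch of that last step, assuming a hypothetical layer table named parcels with a geometry column geom in SRID 4326 (all names and coordinates here are illustrative, not taken from the poster's schema):

```sql
-- Check whether the spatial (GiST) index is used for a bounding-box
-- filter similar to what the FDO provider issues when rendering.
EXPLAIN ANALYZE
SELECT id, geom
FROM parcels
WHERE geom && ST_MakeEnvelope(11.0, 44.0, 11.5, 44.5, 4326);
```

An index scan on the geometry index is what you want to see; a sequential scan over a 1.5 million row table would go a long way toward explaining multi-second render times.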

For the record, I'm not a PostgreSQL expert; my main day-to-day RDBMS is
Microsoft SQL Server. But any RDBMS worth its salt has built-in facilities
to log/capture SQL statement execution and produce an execution plan for
any query.

- Jackie


