PAGI::Server Performance and Hardening

Here's some raw data on how PAGI::Server, the reference implementation of the PAGI spec (https://github.com/jjn1056/pagi), is coming along performance-wise. Below are the results of some high-concurrency testing against a basic 'Hello world' app: https://github.com/jjn1056/pagi/blob/main/examples/01-hello-http/app.pl
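
To give a sense of what's being benchmarked: the app under test is about the simplest thing a PAGI server can run. The sketch below is not the actual example from the repo (follow the link above for that); the ($scope, $receive, $send) callable and the event names are assumptions modeled on ASGI-style interfaces, not taken from the PAGI spec.

  use strict;
  use warnings;

  # Hypothetical minimal "hello world" PAGI app. The real one lives at
  # examples/01-hello-http/app.pl; this signature and these event shapes
  # are assumptions for illustration only.
  my $app = sub {
      my ($scope, $receive, $send) = @_;
      return unless $scope->{type} eq 'http';

      # Send the response head, then the body, as two protocol events.
      $send->({
          type    => 'http.response.start',
          status  => 200,
          headers => [ [ 'content-type' => 'text/plain' ] ],
      });
      $send->({
          type => 'http.response.body',
          body => 'Hello, world!',
      });
  };

  $app;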

This testing was run on my Intel-era MacBook Pro (2.4 GHz 8-core Intel Core i9), which is not exactly known for being a great server. The server was started as follows (LIBEV_FLAGS=8 tells libev to use its kqueue backend):

 LIBEV_FLAGS=8 ./bin/pagi-server --workers 16 --quiet --no-access-log --loop EV  ./examples/01-hello-http/app.pl 

Results:

% hey -z 30s -c 500 http://localhost:5000/

Summary:
  Total:    30.0217 secs
  Slowest:  0.1110 secs
  Fastest:  0.0097 secs
  Average:  0.0312 secs
  Requests/sec: 16010.2544


Response time histogram:
  0.010 [1] |
  0.020 [376]   |
  0.030 [222649]    |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.040 [226482]    |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.050 [26058] |■■■■■
  0.060 [4279]  |■
  0.070 [492]   |
  0.081 [123]   |
  0.091 [173]   |
  0.101 [20]    |
  0.111 [2] |


Latency distribution:
  10% in 0.0249 secs
  25% in 0.0273 secs
  50% in 0.0304 secs
  75% in 0.0340 secs
  90% in 0.0381 secs
  95% in 0.0414 secs
  99% in 0.0505 secs

Details (average, fastest, slowest):
  DNS+dialup:   0.0000 secs, 0.0097 secs, 0.1110 secs
  DNS-lookup:   0.0000 secs, 0.0000 secs, 0.0193 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0091 secs
  resp wait:    0.0311 secs, 0.0096 secs, 0.1110 secs
  resp read:    0.0000 secs, 0.0000 secs, 0.0035 secs

Status code distribution:
  [200] 480655 responses

This beats a similar PSGI hello world running under Starman by about 30%, but the important bit to note is that PAGI::Server successfully responded to every request, whereas Starman fell over under this load on my machine, failing roughly 80% of the requests. That's why you need to run Starman behind an edge server like Nginx; it just can't take the high concurrency on its own.
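
For reference, the PSGI side of that comparison is just the stock hello world; the exact app and Starman invocation behind the numbers above aren't shown here, but it was along these lines:

  # app.psgi -- the usual minimal PSGI hello world
  my $app = sub {
      my ($env) = @_;
      return [ 200, [ 'Content-Type' => 'text/plain' ], [ "Hello World\n" ] ];
  };

started with something like (worker count matching the PAGI run; the exact flags are an assumption):

  starman --workers 16 --listen :5000 app.psgi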

I've also been doing HTTP/WebSocket compliance and security testing on PAGI::Server; the working draft is here:

https://github.com/jjn1056/pagi/blob/main/lib/PAGI/Server/Compliance.pod
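
If you want to poke at the hardening side without any special tooling, the kind of probe I mean can be as simple as opening a raw socket and sending deliberately broken requests, then checking that the server rejects them cleanly rather than hanging or crashing. A sketch (the specific malformed request is just an illustration):

  use strict;
  use warnings;
  use IO::Socket::INET;

  # Connect to the locally running server (port 5000, as in the benchmark above).
  my $sock = IO::Socket::INET->new(
      PeerAddr => '127.0.0.1',
      PeerPort => 5000,
      Proto    => 'tcp',
  ) or die "connect failed: $!";

  # A mangled request line and a header containing a bare CR are the kind of
  # edge cases a hardened server should refuse with a 400 or a closed connection.
  print $sock "GE T / HTTP/1.1\r\nHost: localhost\rX-Bad: 1\r\n\r\n";

  my $resp = do { local $/; <$sock> };   # slurp whatever comes back
  print defined $resp ? $resp : "(connection closed with no response)\n";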

Volunteers who know a lot about flogging servers are very welcome to help me test this. In production you are still likely to run behind a proxy or edge server like Nginx, but the more robust PAGI::Server is standing alone, the better.
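
For reference, a minimal Nginx front end for a PAGI::Server listening on port 5000 looks something like this (the port, server_name, and so on are placeholders, not part of the project):

  upstream pagi {
      server 127.0.0.1:5000;
  }

  server {
      listen 80;
      server_name example.com;

      location / {
          proxy_pass http://pagi;
          # Pass WebSocket upgrades through to the backend.
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }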
