Benchmark for CatchChallenger cache

I have finished implementing a cache in HPS to reduce memory usage at server startup and to cut startup time for both server and client (useful on phones or anywhere the CPU is slow).

EDIT: How the cache syncs with the datapack:

  • Scan the datapack at startup and compute a checksum; if the checksum matches the cache, load the cache. Problem: it is slow, mostly on slow disks and filesystems, because every inode must be accessed, which can greatly reduce the benefit of the cache
  • Never check whether the datapack changed; regenerate the cache manually. Better performance, but not suited to everyone: it needs to be integrated wherever you update your datapack
CatchChallenger binary datapack size (the datapack file cache * is not used if the HTTP mirror is enabled). Maps can be stored in a quadtree to improve the space used, in exchange for more cache misses.
The datapack load time is better, but can be improved even further.
Memory usage is a bit better, but can still be optimized.
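The quadtree idea above can be sketched as follows (an illustrative region quadtree over a tile grid, not CatchChallenger's actual map format): uniform areas collapse to a single node, which saves space, but each lookup walks the tree instead of indexing a flat array, hence the extra cache misses.

```cpp
#include <array>
#include <cstdint>
#include <memory>

struct QuadNode {
    bool leaf = true;
    std::uint8_t tile = 0;                           // tile id if leaf
    std::array<std::unique_ptr<QuadNode>, 4> child;  // NW, NE, SW, SE
};

// Build over a size x size grid (size must be a power of two).
inline std::unique_ptr<QuadNode> build(const std::uint8_t *grid, int stride,
                                       int x, int y, int size) {
    auto n = std::make_unique<QuadNode>();
    if (size == 1) { n->tile = grid[y * stride + x]; return n; }
    const int h = size / 2;
    n->child[0] = build(grid, stride, x,     y,     h);
    n->child[1] = build(grid, stride, x + h, y,     h);
    n->child[2] = build(grid, stride, x,     y + h, h);
    n->child[3] = build(grid, stride, x + h, y + h, h);
    // Collapse if all four children are leaves with the same tile: this is
    // where the space saving comes from on uniform map areas.
    bool same = true;
    for (auto &c : n->child) same = same && c->leaf && c->tile == n->child[0]->tile;
    if (same) { n->tile = n->child[0]->tile; for (auto &c : n->child) c.reset(); return n; }
    n->leaf = false;
    return n;
}

// Lookup walks down the tree: one pointer dereference (potential cache miss)
// per level, versus a single indexed read in a flat array.
inline std::uint8_t tileAt(const QuadNode *n, int x, int y, int size) {
    while (!n->leaf) {
        const int h = size / 2;
        n = n->child[(y >= h ? 2 : 0) + (x >= h ? 1 : 0)].get();
        if (x >= h) x -= h;
        if (y >= h) y -= h;
        size = h;
    }
    return n->tile;
}
```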

QGraphicsView performance in 2019

Hi, while developing CatchChallenger I had a problem with QWidget: performance on Android was low, around 9 FPS.

My mix of QGraphicsView + QWidget is not supported when I enable OpenGL on Android, and on other platforms it is buggy.

So I tested a lot of combinations; QGraphicsView + OpenGL + native widgets gives correct performance for a 2D game: 60 FPS on all platforms with <6% CPU on Android (on a Cortex-A53). So Qt via QGraphicsView seems 100% ready for games in 2019.

For WebAssembly and Android there is no window manager, so I need to rework everything into a single window.


6H to 5min for CatchChallenger compilation time

How did I go from 8 hours to compile my CatchChallenger cluster nodes down to 5 minutes?

  • First, make sure only the minimal headers are included in your source files
  • I use the same OS and architecture on all my nodes, compile on one node per device type instead of on every node (lower concurrency), and copy the binary. This greatly lowers memory pressure and avoids swapping

With this I went from -j1 to -j9 compilation, which is much more powerful too. In the end my compile time is very low.
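The first point, minimal headers, mostly means forward-declaring instead of including. A hypothetical sketch (Player/Map are illustrative names here, not necessarily the real CatchChallenger classes):

```cpp
// player.h — forward-declare instead of pulling in a heavy header.
// Because Player only stores a pointer, the compiler never needs Map's full
// definition here; only player.cpp includes map.h. Editing map.h then no
// longer recompiles every file that includes player.h.
class Map;  // forward declaration instead of #include "map.h"

class Player {
public:
    explicit Player(Map *map) : map(map) {}
    Map *currentMap() const { return map; }
private:
    Map *map;  // pointer only: an incomplete type is enough
};
```

This shrinks the include graph, which is where most of the redundant compilation work in a large C++ project comes from.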


After C10k (10 000 concurrent connections) and C10M (10 000 000 concurrent connections), C10B is near: 10 billion concurrent connections, 10 Tbps, 10 billion packets per second, 1 billion connections/second. In my case: 1 billion concurrent connections, 1 Tbps, 1 billion packets per second, 1 million connections/second.

I have tested it with CatchChallenger, on a 32-core Threadripper 2990WX, 256 GB DDR4, and a Radeon RX Vega 64.

This experiment can be applied to high-speed network packet processing on 100G+ Ethernet or InfiniBand devices. I did it as R&D for my company's router, with 1 Tbps routing capability (at 64 B packet size) certified by benchmark.

I do stateless filtering on the IPv6 input, then each processing unit does the stateful work, with special dispatching of memory accesses to reduce cache misses. That means: IP/TCP processing on the GPU, CatchChallenger processing on the CPU.

Difficulty: memory is very limited, so I used a specialized swap technique with barriers to delay some classes of traffic (map movement: 95% of the traffic, plus an unpublished protocol with an extra movement vector to avoid memory accesses and only parse the last position when the previous data is needed), processing it in bulk.

  • 59 ms average reply time
  • 342 ms reply time at the 95th percentile
  • 1.2 Tbps of network bandwidth burned



HTTP/3 in Confiared


I have already finished HTTP/3 support for Confiared, but it is not pushed to production yet because I need to check it more.

Next week I will work on it. For South America (e.g. Bolivia), one of the interesting parts is that HTTP/3 is less RTT-sensitive. That means: compared to HTTPS over TCP, it waits much less time before starting to download the web page.

It will first be enabled on the IPv4 reverse proxy for our VPS and hosting, so you can get it on your servers.


Mimic UDP with TCP


My problem was: in my game (CatchChallenger) I need to update a player's position on the other players' clients, but if newer data is available, the old data should not be sent. I also don't want to send every vector change (position + direction). But my game uses TCP.

Set the socket buffer to a small size; then, depending on your I/O model:

  • Async: when you receive EAGAIN, store the last position in a buffer where the newest data overwrites the content; when you no longer get EAGAIN you can send the buffer
  • Sync: you need to use a thread; it is the same method as above, but instead of EAGAIN you use a flag: bool eagain=true; write(data); eagain=false;

This way you have: the packet currently being written (which can no longer change), X dropped updates while waiting for the write buffer to free up, and the last position sent. Exactly like UDP, but with TCP advantages such as packet ordering (and the small TCP packet overhead).
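The async variant can be sketched as a one-slot "latest position" buffer. While the socket reports EAGAIN (kernel send buffer full), new positions simply overwrite the pending slot, so older positions are dropped exactly as UDP would drop them, while whatever does go out keeps TCP's ordering. The `trySend` callback stands in for a non-blocking `send()` here and is an assumption of this sketch, not CatchChallenger's real API.

```cpp
#include <functional>
#include <optional>
#include <utility>

struct PosUpdate { int x, y, direction; };

class LatestPosSender {
public:
    // trySend returns false to report EAGAIN (socket buffer full).
    explicit LatestPosSender(std::function<bool(const PosUpdate &)> trySend)
        : trySend(std::move(trySend)) {}

    void queue(const PosUpdate &p) {
        if (pending) ++dropped;  // older position discarded, like UDP
        pending = p;             // last data overrides the content
        flush();
    }

    // Call when poll()/epoll reports the socket writable again.
    void onWritable() { flush(); }

    int dropped = 0;  // updates overwritten while waiting for the buffer

private:
    void flush() {
        while (pending) {
            if (!trySend(*pending)) return;  // EAGAIN: keep only the newest
            pending.reset();
        }
    }
    std::function<bool(const PosUpdate &)> trySend;
    std::optional<PosUpdate> pending;
};
```

The small SO_SNDBUF mentioned above is what makes EAGAIN show up early enough for the overwrite logic to matter.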

If you like these tips, follow me on Facebook and LinkedIn.


Continuous fuzzing


I am implementing continuous fuzzing; Wikipedia has a good definition:

Fuzzing (or random data testing) is a technique for testing software. The idea is to inject random data into a program's inputs. If the program fails (for example by crashing or raising an error), then there are defects to fix.

I have a test infrastructure, and I run the fuzzing continuously via the bots I mentioned in the previous article. This makes it possible to detect bugs before players notice them.

I have coupled this with state-of-the-art techniques such as the software sanitizers Google uses in Chrome (cf. OSS-Fuzz), which detect buffer overflows and other bugs/crashes/security flaws. This allowed me to find even more bugs, and once they are fixed it will let me provide better stability.
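The core idea can be shown with a minimal hand-rolled loop: hammer a parser with random inputs and check that it only ever rejects them cleanly. `parsePacket` is a toy stand-in for a protocol parser, not CatchChallenger's real one; a production setup would use libFuzzer/OSS-Fuzz with AddressSanitizer instead of this sketch.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Toy parser: [1-byte type][1-byte len][len bytes payload]. It returns false
// on malformed input instead of reading past the end of the buffer — the
// bounds checks are exactly what the fuzzer exercises.
inline bool parsePacket(const std::vector<std::uint8_t> &in) {
    if (in.size() < 2) return false;
    const std::size_t len = in[1];
    if (in.size() != 2 + len) return false;  // length field must match
    return in[0] <= 0x10;                    // only 17 packet types are valid
}

// One fuzzing session: feed `rounds` random-length, random-byte inputs to the
// parser. Returns how many were accepted; everything else must be rejected
// cleanly, never crash (a sanitizer would catch out-of-bounds reads here).
inline int fuzzRounds(int rounds, std::uint32_t seed) {
    std::mt19937 rng(seed);
    int accepted = 0;
    for (int i = 0; i < rounds; ++i) {
        std::vector<std::uint8_t> input(rng() % 300);
        for (auto &b : input) b = static_cast<std::uint8_t>(rng());
        if (parsePacket(input)) ++accepted;
    }
    return accepted;
}
```

Running this under AddressSanitizer (or turning `parsePacket` into a `LLVMFuzzerTestOneInput` entry point) is what upgrades "random inputs" into the coverage-guided, sanitizer-backed setup described above.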