Hi, to improve the service at Confiared I’m rewriting the CDN software.
We were using Nginx + Nginx FastCGI cache + PHP (just to proxy the reply). This solution lacked fine-grained cache control tuning and had some bugs due to the Nginx cache.
So I have rewritten the CDN as a standalone FastCGI server, where the cache is directly controlled by the server. If the same URL is already being downloaded, the content is served from the partially downloaded data. I chose a single-threaded design to get great performance without thread-coherency code (simpler to develop, and more efficient when the code is very fast, because otherwise most of the time is consumed by thread management and data migration from one CPU to another).
The code is specific, not flexible and generalist. I parse the protocols (DNS, FastCGI, …) on the fly. That greatly improves performance and reduces memory usage. An internal page is served 3x faster than a simple « Hello world » in PHP 7.4.
The future improvements are: a better cache, caching some things where needed (DNS, …), using io_uring to improve file access and be 3x faster than Nginx on static files, and profiling to optimize the code. (And maybe writing my own HTTP server.)
What is the problem with other architectures, for Linux and for developers?
When you have code abstraction (as in Python, Java, C#), porting is transparent, so supporting another architecture is easy. But a lot of the software in the system is in C/C++, with old code, which means: each time you have system-specific or architecture-specific code, you need to adapt it to the new system/architecture. For 32-bit, in C/C++, some dirty code casts pointers to integers, which creates problems at the 32/64-bit barrier; add to that that 32-bit gets less popular every year, so it is less maintained by the owner of each project, …
Keeping 64-bit-only special cases is harder than doing generic architecture support with clean code.
About obsolescence: ARM dropped 32-bit over the last 10 years. I have lots of running 32-bit hardware. It's fully functional and very good for the assigned task.
In this context: everybody tries hard just to save a few % of resources for the earth. We have no control over big companies, which then drop support for the latest Android on 5-year-old phones. But generic architecture support is easy (yes, not optimized, and?) and avoids:
Manufacturing other hardware (less waste, fewer wars over rare resources)
Manufacturing hardware with compatibility layers, hence more silicon and less performance
Shipping (resources and problems)
People and companies spending lots of time buying other hardware, swapping it, configuring it, and fixing things for the new hardware
On performance: 32-bit on x86 is slower, but for software with heavy pointer usage it uses less memory. And performance is not the target for everyone.
I have finished implementing the cache in HPS to reduce memory use at server startup, and to reduce startup time for server and client (useful on phones or where the CPU is slow).
EDIT: How does the cache sync with the datapack?
Scan the datapack at startup and create a checksum; if the checksum matches the cache, load the cache. Problem: it's slow, mostly on slow disks and filesystems, because it needs to access every inode, which can greatly slow down the cache.
Never check whether the datapack changed; regenerate the cache manually. More performance, but not suited to everyone: it needs to be integrated wherever you update your datapack.
Hi, while developing CatchChallenger I had a problem with QWidget: I had low performance on Android, 9 FPS.
My mix of QGraphicsView + QWidget is not supported if I enable OpenGL on Android, and on other platforms it is buggy.
So I tested lots of things; QGraphicsView + OpenGL + native widgets for a 2D game gives correct performance: 60 FPS on all platforms with <6% CPU on Android (Cortex-A53). So Qt via QGraphicsView seems 100% ready for games in 2019.
For WebAssembly and Android: there is no window manager, so I need to remake everything in one window.
After C10k (10,000 concurrent connections) and C10M (10,000,000 concurrent connections), C10B is now near (10 billion concurrent connections, 10 Tbps, 10 billion packets per second, 1 billion connections/second). In my case: 1 billion concurrent connections, 1 Tbps, 1 billion packets per second, 1 million connections/second.
I have tested it with CatchChallenger, on a 32-core Threadripper 2990WX, 256 GB DDR4, Radeon RX Vega 64.
This experiment can be used for high-speed network packet processing on 100G+ Ethernet or InfiniBand devices. I did it as R&D for my company's router with 1 Tbps routing capability (at 64 B packet size), certified with a benchmark.
I did stateless filtering on IPv6 input; afterwards each processing unit does the stateful work, but with special dispatching of memory accesses to reduce cache misses. Meaning: IP/TCP processing on the GPU, CatchChallenger processing on the CPU.
Difficulty: very limited memory. I used a specialized swap technique with barriers to delay some classes of traffic (moves on the map: 95% of traffic is moves, plus an unpublished protocol with another move vector to avoid memory accesses and only parse the last position when the previous data is needed), with bulk processing of all this.
59 ms average reply time
342 ms reply time at the 95th percentile
1.2 Tbps of network bandwidth burned
I have already finished HTTP/3 support for Confiared, but it is not pushed to production yet because I need to check it more.
Next week I will work on it, for South America (e.g. Bolivia); one interesting part is that HTTP/3 is more RTT-insensitive. Meaning: for HTTPS, it waits much less before starting to download the web page.
It will first be enabled on the IPv4 reverse proxy for our VPS and hosting, so you can get it on your servers.
My problem was: I have a game (CatchChallenger), and I need to update a player's position for the other players, but if newer data is available, the old data should not be sent. Don't send every vector change (position + direction). But my game runs over TCP.
Set the socket buffer to a small size, then, depending on whether you do it in:
Async: when EAGAIN is received, store the last position in a buffer where the newest data overwrites the content; once you no longer get EAGAIN you can start sending the buffer
Sync: you need to use a thread; it's the same method as above, but instead of EAGAIN you just do: bool eagain=true;write(data);eagain=false;
This way you have: the packet currently being written (which can't change), X updates dropped while waiting for the write buffer to free, and the last position sent. Exactly like UDP, but with the TCP advantages such as packet ordering (and the small TCP packet overhead).