Mojaloop is increasingly being relied upon as a national and regional payment infrastructure. In that role, trust depends not only on performance but also on consistent availability, operational resilience, and the ability to operate reliably over sustained periods.
Scheme operators and central banks therefore need clear evidence that the switch can handle real-world transaction volumes while upholding the core ISO 27001 principles of availability, integrity, and confidentiality, together with the necessary security and operational safeguards.
This is why performance work is so significant: it forms one of the key pillars that support confidence in the platform’s readiness for real-world operation.
This series shares practical lessons from INFITX’s performance engineering work across Mojaloop v17, in collaboration with the Mojaloop Foundation and the broader community. Each post is short and focused: one performance theme, why it matters in financial systems, and what we did in Mojaloop v17 to move it forward.
Why performance is a trust requirement
In a production payment switch, performance is not a vanity metric. It underpins trustworthy operation, especially when you consider availability, integrity, and confidentiality as non-negotiable constraints.
In practice, trustworthy performance is the combination of:
- Sustained throughput under realistic traffic (not just a short burst)
- Predictable latency (especially tail latency) across the end-to-end flow
- Operational stability (repeatable behaviour between runs and deployments)
- Security enabled so the result reflects real conditions
When these are in place, operators can scale with confidence and focus on service outcomes, not firefighting.
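To make the tail-latency point concrete: an average (or even a p95) can look healthy while a small fraction of slow requests degrades the experience users actually feel. The sketch below is purely illustrative — the `percentile` helper and the sample values are invented for this example, not taken from Mojaloop's test suite:

```javascript
// Minimal sketch of why tail latency matters. The helper and sample
// values are illustrative only, not Mojaloop measurements.
function percentile(samplesMs, p) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  // Nearest-rank method: smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// 95 fast requests and 5 slow ones: the mean and p50/p95 look healthy,
// while p99 exposes the slow tail.
const samples = [...Array(95).fill(50), ...Array(5).fill(2000)];
const mean = samples.reduce((a, b) => a + b, 0) / samples.length;

console.log(`mean=${mean}ms p50=${percentile(samples, 50)}ms ` +
            `p95=${percentile(samples, 95)}ms p99=${percentile(samples, 99)}ms`);
// mean=147.5ms p50=50ms p95=50ms p99=2000ms
```

This is why the criteria above single out tail latency: a switch that meets its average-latency target can still fail the customers sitting in that p99 tail.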
An overview of INFITX's contributions
INFITX’s contribution to Mojaloop v17 is best understood as two linked outcomes:
- Core service optimisations: Cross-cutting improvements across switch components to increase throughput and predictability end-to-end, with security enabled.
- A reproducible starting point: A baseline Helm deployment profile (including guidance on starting replica configuration) plus the tooling needed to make performance work repeatable: k6 test scripts, DFSP simulators that can sustain load, and observability dashboards (metrics and traces).
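As a taste of what "repeatable" means on the k6 side, a sustained-load scenario with tail-latency thresholds can be expressed as a k6 options block like the one below. The rate, duration, and threshold values here are placeholders for illustration, not Mojaloop's published targets:

```javascript
// Sketch of a k6 options block for a sustained-load run.
// All numbers are placeholders, not Mojaloop performance targets.
export const options = {
  scenarios: {
    sustained: {
      executor: 'constant-arrival-rate', // hold a fixed request rate, not a fixed VU count
      rate: 500,                         // assumed target: 500 requests per second
      timeUnit: '1s',
      duration: '30m',                   // long enough to surface drift, GC pauses, rebalances
      preAllocatedVUs: 200,
    },
  },
  thresholds: {
    // Fail the run if tail latency or the error rate degrades.
    http_req_duration: ['p(95)<1000', 'p(99)<2000'],
    http_req_failed: ['rate<0.001'],
  },
};
```

The `constant-arrival-rate` executor matters here: it keeps offered load steady even when the system slows down, which is what distinguishes a sustained-throughput result from a short burst.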
A collaboration and capability story (not just technology)

Performance isn’t “just code.” It is also a workflow: measure under load, make bottlenecks visible, fix the right things first, and capture the operational patterns that make results repeatable.
That is why we frame our work as both an engineering contribution and a capacity-building effort. This approach helps adopters:
- run repeatable test cycles,
- interpret results consistently,
- and evolve toward the right performance and operational model for each jurisdiction.
What this series covers
Over the next five posts, we’ll cover:
- Batching, caching, and reducing avoidable “chatty” work
- Removing bottlenecks in the ingress/front-door and eventing path
- Kafka partitioning and handler concurrency for real throughput
- Replica configuration, scheduling, and performance repeatability in Kubernetes
- Observability and profiling as first-class performance tools
What’s next
Part 2 starts with a foundational throughput lever: batching and reducing avoidable work in hot paths, plus where caching helps (and where it can hurt).
