
Latency vs Throughput


Context: Your software projects, whether personal or for a startup, need to be evaluated against both end-user needs and economic viability. The way to evaluate them is to experiment with features while they are being developed. Why? Because the single biggest source of waste in a software project is engineers building something that has no value. Fast experimentation to learn what works (and what is valuable to end users) and what does not is crucial for survival. The more experiments you can perform per day and per dollar, the better your chances of survival and of hitting something that is really needed.


So how should you balance faster experimentation against a design that scales? Focusing only on reducing experimentation latency makes the design messier, which in turn makes it very difficult to scale. On the other hand, premature optimization makes the product robust, but the product may never take off, as experimentation latency suffers in favor of throughput.



Consider the case of a modular vs. a connected architecture:


It's very fast (read: cheap) to add the first few features with a fully connected architecture: all the resources are always available and you don't have to spend much effort on abstraction. However, since everything is connected to everything else, adding new features becomes increasingly painful, and you have to spend a growing amount of effort on testing, making the system fault tolerant, and ensuring delivery. This favors latency and sacrifices throughput, in the hope that the exponentially increasing complexity and effort bring in exponentially increasing revenue (do they?).
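As a rough sketch of what "fully connected" looks like in code (all the names here are hypothetical, for illustration only): every feature reads and writes shared state directly, so the first features are very cheap to add, but each new one touches everything that came before it.

```python
# Hypothetical "fully connected" app: one object holds all state,
# and every feature mutates it directly.

class App:
    def __init__(self):
        self.users = {}        # shared state, touched by every feature
        self.orders = []
        self.email_log = []

    def place_order(self, user_id, item):
        # Feature 1: ordering reaches straight into order storage...
        self.orders.append({"user": user_id, "item": item})
        # ...and into notifications...
        self.email_log.append(f"order confirmation to {user_id}")
        # ...and into loyalty points. Adding feature N means
        # understanding (and re-testing) features 1..N-1.
        self.users.setdefault(user_id, {"points": 0})["points"] += 1

app = App()
app.place_order("u1", "book")
```

The first feature lands quickly because nothing needs to be abstracted, but notice how a change to, say, the loyalty logic now requires re-testing ordering and notifications too.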


In a modular design, the connections between modules are kept to a bare minimum, so changes in one module are much less likely to interact with changes in other modules. This scales very well, since the cost of delivering a new feature stays fairly constant, but the cost of the first feature is high: throughput is favored at the expense of experimentation latency.
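The same features split into modules might look like the sketch below (again, hypothetical names): each module exposes one narrow entry point, and the order service talks to the others only through those entry points, so a change inside one module stays local to it.

```python
# Hypothetical modular version: each concern lives in its own class
# with a single narrow interface.

class Notifier:
    def __init__(self):
        self.sent = []

    def order_confirmed(self, user_id):
        self.sent.append(f"order confirmation to {user_id}")

class Loyalty:
    def __init__(self):
        self.points = {}

    def award(self, user_id, n=1):
        self.points[user_id] = self.points.get(user_id, 0) + n

class OrderService:
    def __init__(self, notifier, loyalty):
        # Depends on the other modules' interfaces, not their internals.
        self.orders = []
        self._notifier = notifier
        self._loyalty = loyalty

    def place_order(self, user_id, item):
        self.orders.append({"user": user_id, "item": item})
        self._notifier.order_confirmed(user_id)
        self._loyalty.award(user_id)

# Wiring the modules together is the upfront cost of the first feature.
notifier = Notifier()
loyalty = Loyalty()
service = OrderService(notifier, loyalty)
service.place_order("u1", "book")
```

The upfront cost is visibly higher (three classes and explicit wiring for the same behavior), but rewriting `Notifier` can no longer break `Loyalty`, which is what keeps the cost of the Nth feature roughly constant.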



What should you favor and at what cost?


The strategy I use is to focus on experimentation in the initial phase, when much of the team's effort goes into making sure the features being developed are tested well. As usage climbs, that is, when we are hitting our targets and are sure we need to focus on scaling a specific offering, we look into transitioning the codebase to a more modular design, which involves a lot of design work and refactoring. Sometimes the cleanup is local rather than widespread, but the essence remains the same.

This is not just a transition in the design of the codebase; it is also a transition in how we think about the priorities of the engineers on the team.


