Performing Monetary Calculations with Kotlin
Monetary calculations must be thoroughly tested because, for your end-users, the values directly translate to financial outcomes.
During the first and second COVID waves, several myBillBook users were dealing in oxygen cylinders. Incorrect calculations would affect not only them, but also the hospitals, patients and government entities they were dealing with.
Here are some key pointers to ensure your monetary calculations are built for precision and scale. While these pointers are platform agnostic, we'll use Android and Kotlin to explore the issues in some depth and identify workarounds.
Selecting the correct data type
Floating point arithmetic is imprecise because it represents real numbers as approximations, trading precision for range. This is also true when we use the Double data type.
Since these values are not stored exactly in memory, the small differences can compound across your overall calculations.
BigDecimal, on the other hand, represents a signed decimal number of arbitrary precision with an associated scale. In theory, it is possible to calculate the value of pi to 2 billion decimal places using BigDecimal, with available physical memory being the only limit [ref].
Therefore, in cases where precision is of paramount importance, we must use the BigDecimal data type.
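To see the difference in practice, here is a minimal sketch contrasting the two types. Note that BigDecimal should be constructed from a String (not from a Double) so the exact decimal value is preserved:

```kotlin
import java.math.BigDecimal

fun main() {
    // Double: 0.1 and 0.2 have no exact binary representation,
    // so the error surfaces in the result.
    println(0.1 + 0.2)                      // 0.30000000000000004

    // BigDecimal: construct from strings to keep the exact decimal value.
    val sum = BigDecimal("0.1") + BigDecimal("0.2")
    println(sum)                            // 0.3
}
```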
Running calculations on a background thread
Have you ever encountered the following message:
Skipped 96 frames! The application may be doing too much work on its main thread.
Arithmetic operations are CPU intensive, and floating point arithmetic even more so, because the hardware has to deal with both the mantissa and exponent parts. This can make your application laggy or unresponsive and, if you are doing too many calculations, can contribute towards ANRs.
Therefore, we must perform all calculations on a background thread and publish the results on the main thread upon completion.
While this is now simple to achieve in Android using coroutines, we must ensure that these calculations are lifecycle aware. Consider a scenario where you are performing some heavy calculations in the background and the user minimises the app. If your code is not lifecycle aware, then once the calculations are complete and your application logic tries to update the UI, it can crash because your view has already been destroyed.
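One way to get both properties is a lifecycle-aware scope with the heavy work moved off the main dispatcher. The sketch below is illustrative: InvoiceViewModel, computeInvoiceTotal() and publishTotal() are hypothetical names, while viewModelScope (cancelled automatically when the ViewModel is cleared) and withContext(Dispatchers.Default) are the standard Jetpack and coroutine APIs:

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext
import java.math.BigDecimal

// Hypothetical ViewModel; the names are illustrative, not from the original post.
class InvoiceViewModel : ViewModel() {

    fun computeInvoiceTotal(lineItems: List<BigDecimal>) {
        // viewModelScope is tied to the ViewModel's lifecycle, so the coroutine
        // is cancelled when the ViewModel is cleared.
        viewModelScope.launch {
            val total = withContext(Dispatchers.Default) {
                // CPU-intensive arithmetic runs on a background dispatcher.
                lineItems.fold(BigDecimal.ZERO) { acc, item -> acc + item }
            }
            // Back on the main dispatcher: safe to publish the result to the UI.
            publishTotal(total)
        }
    }

    private fun publishTotal(total: BigDecimal) {
        // Expose the value to the UI layer (e.g. LiveData or StateFlow) in a real app.
    }
}
```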
Handling precision
While all calculations must be performed with the actual values, when interfacing with the end-user it is imperative to round the values to a pre-defined precision. This precision should be chosen based on your particular use case.
Without any rounding, your users will see values such as 100.000000123 in your user interface, which looks unprofessional and is of little use to them.
On the contrary, with too little precision, data may not be communicated clearly. For example, at myBillBook we have businesses that deal in jewellery, where a common transaction may include items measured in either grams or kilograms. Therefore, to represent a value of 123 grams in kilograms as 0.123 kilograms, we require a precision of at least 3 decimal places.
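A minimal sketch of this kind of display rounding, assuming a hypothetical toDisplayString() helper, a default of 2 decimal places for currency amounts and RoundingMode.HALF_UP as the rounding strategy (your business rules may require a different mode):

```kotlin
import java.math.BigDecimal
import java.math.RoundingMode

// Hypothetical helper: keep the exact value for calculations, round only for display.
fun BigDecimal.toDisplayString(scale: Int = 2): String =
    setScale(scale, RoundingMode.HALF_UP).toPlainString()

fun main() {
    val exact = BigDecimal("100.000000123")
    println(exact.toDisplayString())        // 100.00
    println(exact.toDisplayString(3))       // 100.000

    val weightInKg = BigDecimal("123").divide(BigDecimal("1000"))
    println(weightInKg.toDisplayString(3))  // 0.123
}
```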
Supporting large values
Do your end-users perform transactions in crores? If yes, then for a value of 10000000.0, they'd be seeing 1.0E7 on the UI. A way to get around this problem is to convert your values with .toPlainString() before rendering them onto the UI.
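A small sketch of the difference; toPlainString() is a BigDecimal method, so the value needs to be a BigDecimal (or converted to one) before formatting:

```kotlin
import java.math.BigDecimal

fun main() {
    val crore = 10000000.0
    println(crore.toString())             // 1.0E7 -- scientific notation

    val croreDecimal = BigDecimal("1E+7")
    println(croreDecimal.toString())      // 1E+7 -- still not user friendly
    println(croreDecimal.toPlainString()) // 10000000
}
```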
Validate everything at the client
Normally, your server side logic will include validations. However, it is recommended to also validate everything at the client end, for the following reasons:
- Reduces the validation latency caused by API response time. Client-side validations make your application faster and lead to an improved user experience.
- In case some validation is missed on the server side, your client will not push incorrect data. This can especially happen when your values can be null, empty or NaN on the client end (see the sketch after this list).
- Supports an offline-first approach.
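A minimal client-side sketch, assuming a hypothetical parseAmount() helper that guards against null, blank and NaN inputs before they reach the calculation layer or the API:

```kotlin
import java.math.BigDecimal

// Hypothetical helper: returns a valid BigDecimal, or null if the input
// cannot be trusted (null, blank, non-numeric, NaN or Infinity).
fun parseAmount(raw: String?): BigDecimal? {
    if (raw.isNullOrBlank()) return null
    val value = raw.toDoubleOrNull() ?: return null
    if (value.isNaN() || value.isInfinite()) return null
    // Construct from the original string so no floating point error creeps in.
    return raw.toBigDecimalOrNull()
}

fun main() {
    println(parseAmount("123.45"))   // 123.45
    println(parseAmount(""))         // null
    println(parseAmount(null))       // null
    println(parseAmount("NaN"))      // null
    println(parseAmount("abc"))      // null
}
```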
Add a layer of unit test cases
The problems with manual testing are:
- It is prone to human error.
- The test suite may miss certain edge-case values that can break calculations.
- It is time consuming to re-test calculations with every release.
Writing a layer of unit tests around your calculations circumvents all of the problems above.
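As a sketch, a JUnit test around the hypothetical toDisplayString() rounding helper from the precision section might look like this:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test
import java.math.BigDecimal
import java.math.RoundingMode

class MonetaryCalculationTest {

    // Same hypothetical helper as in the precision example above.
    private fun BigDecimal.toDisplayString(scale: Int = 2): String =
        setScale(scale, RoundingMode.HALF_UP).toPlainString()

    @Test
    fun `rounds long fractions to two decimal places`() {
        assertEquals("100.00", BigDecimal("100.000000123").toDisplayString())
    }

    @Test
    fun `keeps three decimal places for weights`() {
        assertEquals("0.123", BigDecimal("0.1234").toDisplayString(scale = 3))
    }

    @Test
    fun `large values are rendered without scientific notation`() {
        assertEquals("10000000.00", BigDecimal("1E+7").toDisplayString())
    }
}
```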