Over the past few months, we have been working on a major internal changeover. When we started programming RUNALYZE several years ago, it was intended for a single user; multi-user operation was only added later. It now runs very smoothly, but the data storage was never optimized for the large amounts of data we have to deal with now and in the future. It makes a big difference whether you have to store the activity data of a single user or that of over 30,000 users.
Finding the right solution
Storing these data sets permanently in a relational database is suboptimal. We are currently talking about around seven million activities, each carrying GPS tracks and other sensor data such as heart rate. In the long term, it was imperative to find another solution, and one has now been found, taking into account aspects such as implementation effort and running costs.
The crux of the matter: about 80% of the activities are uploaded once, often viewed only once, and then almost never needed again. But only almost never: recalculations for future features, the poster tool, and the backup of all your data stored with us must of course still be able to access them.
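The access pattern above suggests a classic hot/cold split. As a purely illustrative sketch (RUNALYZE has not published its actual storage layout, so all paths and field names here are assumptions), a file-based cold store could keep searchable summary fields in the relational database and write the rarely needed raw time series to one compressed file per activity:

```python
import gzip
import json
import tempfile
from pathlib import Path

# Hypothetical layout: one compressed JSON document per activity,
# sharded by user id so no single directory grows without bound.

def activity_path(root: Path, user_id: int, activity_id: int) -> Path:
    return root / str(user_id) / f"{activity_id}.json.gz"

def store_activity(root: Path, user_id: int, activity_id: int, data: dict) -> None:
    path = activity_path(root, user_id, activity_id)
    path.parent.mkdir(parents=True, exist_ok=True)
    with gzip.open(path, "wt", encoding="utf-8") as fh:
        json.dump(data, fh)

def load_activity(root: Path, user_id: int, activity_id: int) -> dict:
    path = activity_path(root, user_id, activity_id)
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        return json.load(fh)

# Round trip with a minimal activity record.
root = Path(tempfile.mkdtemp())
record = {"heart_rate": [120, 131, 140], "latitude": [49.45, 49.46]}
store_activity(root, 42, 7, record)
assert load_activity(root, 42, 7) == record
```

Rarely accessed activities cost only disk space this way, while the database stays small enough for the queries that actually run on every page view.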
One advantage of the new, file-based solution is that new data fields can be added and evaluated much faster. Accordingly, several fields have been added at once that will be read from the sensor data in the future. These include Cycling Dynamics, which are recorded by Garmin Vector pedals (left-right balance, torque effectiveness, pedal smoothness, platform center offset and power phase metrics), as well as further Stryd data (Form Power and Leg Spring Stiffness) and Garmin’s continuous performance condition. More details on the new values will follow in a separate blog article.
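Why a file-based record makes new sensor fields cheap can be shown in a few lines: in a schemaless document, a new metric is just a new key, with no ALTER TABLE across millions of rows. The field names below are illustrative examples, not RUNALYZE's actual identifiers:

```python
# Sketch: activities stored as plain dicts; older files simply
# lack the newer keys and fall back to a default when read.

def get_metric(activity: dict, name: str, default=None):
    """Return a sensor metric, tolerating files written before it existed."""
    return activity.get("metrics", {}).get(name, default)

old_activity = {"metrics": {"heart_rate_avg": 145}}
new_activity = {"metrics": {"heart_rate_avg": 150,
                            "form_power": 62,               # Stryd
                            "leg_spring_stiffness": 10.4}}  # Stryd

assert get_metric(old_activity, "leg_spring_stiffness") is None
assert get_metric(new_activity, "leg_spring_stiffness") == 10.4
```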
A minor drawback of the new solution is a slight loss in performance: creating the posters, for example, takes a little longer than before. But there is still room for improvement, which we will work on in the future.
The migration, i.e. the conversion itself, was an exciting topic. Initially, we planned to accept a downtime of 24 hours to make the change, but first tests showed that 24 hours would not be enough. Fortunately, we were able to carry out most of the data processing in parallel with live operation. For about two to three weeks, two servers ran under full load to bring all your data into the new format. In the end, only a very short downtime was needed for the final switchover.
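The pattern described above can be sketched in a few lines. This is a hedged illustration of the general technique, not RUNALYZE's actual code: convert the historical backlog in batches while the site stays live, so that only the small remainder written after the bulk pass needs a short catch-up during the downtime window. All names here are invented for the example:

```python
# Bulk phase: convert everything not yet migrated, in batches, while
# live operation continues. A final run of the same function during the
# short downtime catches the few activities written in the meantime.

def migrate_backlog(activities, convert, batch_size=1000):
    """Convert all activities that are not yet in the new format."""
    backlog = [a for a in activities if not a.get("migrated")]
    for i in range(0, len(backlog), batch_size):
        for activity in backlog[i:i + batch_size]:
            activity["new_format"] = convert(activity)
            activity["migrated"] = True
    return len(backlog)

activities = [{"id": n, "raw": n * 2} for n in range(5)]
migrate_backlog(activities, convert=lambda a: {"payload": a["raw"]})
assert all(a["migrated"] for a in activities)
```

Because the function only touches unmigrated records, it is safe to run it repeatedly: each pass shrinks the remainder, and the last pass during the switchover is correspondingly short.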