Enhanced efficiency for ONEOS through manual and automated testing

by Asmita 

In February 2021, Dilshan Modaragamage approached us with a unique business challenge. Dilshan is an entrepreneur from Canada and the CEO of Wearatec Inc.

Wearatec is a company that makes it easy for brands to launch smart payment-enabled devices. This is made possible with the help of the ONEOS application, which is both an activity-tracking fitness app and a payment app.

The Wearatec brand’s wearable (BLE) devices gather users’ health data such as heart rate, HRV, and SpO2, along with motion sensor data such as steps, sleep, activity, and gestures. Wearables such as rings or alpha tags are used for payment services. All of this data is transmitted to ONEOS, where the application’s algorithms evaluate and analyze it, providing users with health insights and easy payment methods.

The idea behind Wearatec is unique in that no existing fitness app had payment services enabled. The project brought to us involved designing and developing mobile applications for both iOS and Android.

Business Challenge

Our engineers built ONEOS for both Android and iOS. Our designers followed a rigorous design process to develop the UI for ONEOS.

The challenge presented to the QA team was to make sure that the application built for both platforms was consistent in UI and functionality. 

We had to work through various test cases and scenarios manually, which was time-consuming. Because we were testing manually on both iOS and Android devices, we could not cover every possible scenario or combination within our time and resource constraints.

For this project, the API for payment integration was provided by third parties (Cascade and Digiseq), so we had to stay in sync with their teams. Managing time and workflow between the parties was another challenge.

How we solved it

During the project, we set up a team of two QA engineers to oversee the testing of Wearatec.

The QA team began comparing the developed build against the design mocks and reporting bugs. We performed smoke testing on each updated build of the app. After new features were added and pushed to the next build, we performed sanity and regression testing.

Communication and collaboration within the team were the key to resolving these challenges. First, the QA team performed design QA to ensure that design outputs such as the user interface (UI) and user experience (UX) met the specified quality standards. We then began testing every aspect of the developed build against the mocks.

We used both manual and automated testing

We mostly used manual testing to test all the features from start to finish on both Android and iOS. We believe UI/UX testing is best done manually, so we tested the convenience and user-friendliness of our apps from the user’s perspective.

We explored the application flows to find bugs. Once the developers fixed them, we retested each feature to ensure its quality.

Sometimes, a developed feature works well on Android but not on iOS (or vice versa). We did exploratory testing to uncover such defects, reported them, and tracked the bugs detected across environments on both mobile platforms.

Apart from manual testing, we performed load testing on the application to find out how it behaves as the number of users increases. This required a large number of simultaneous users, which was not possible to simulate manually. So, we used a load testing tool called Apache JMeter and observed the application’s behavior under the expected load.
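JMeter test plans themselves are built in the JMeter GUI and saved as .jmx files rather than written as code, but the post-run summary we looked at boils down to simple arithmetic over the recorded samples. A minimal JavaScript sketch of that kind of summary, with purely illustrative sample data (not from an actual run):

```javascript
// Summarize load-test samples, e.g. rows parsed from a JMeter .jtl results file.
// Each sample records its label, elapsed time in ms, and success flag.
function summarize(samples) {
  const total = samples.length;
  const errors = samples.filter((s) => !s.success).length;
  const avgMs = samples.reduce((sum, s) => sum + s.elapsedMs, 0) / total;
  return {
    avgResponseMs: Number(avgMs.toFixed(1)),
    errorRatePct: Number(((errors / total) * 100).toFixed(1)),
  };
}

// Illustrative data only.
const samples = [
  { label: 'login', elapsedMs: 180, success: true },
  { label: 'login', elapsedMs: 240, success: true },
  { label: 'login', elapsedMs: 950, success: false },
  { label: 'login', elapsedMs: 210, success: true },
];

console.log(summarize(samples)); // → { avgResponseMs: 395, errorRatePct: 25 }
```

A summary like this makes it easy to spot the point where average response time or error rate starts to climb as the simulated user count grows.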

For the web admin portal of ONEOS, we carried out both manual and automated testing. On the manual side, we did exploratory testing first, then focused on UI/UX testing and other functional testing. We verified the frontend by comparing the developed web application against the mocks. For the automation part, we automated the web application using Cypress, as it provides an easy way to set up, write, and debug tests, and we performed end-to-end testing of the admin portal with it. For the backend, we performed API testing using Postman.
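The backend API checks amounted to assertions on the status code and the shape of the response body. A minimal stand-alone sketch of that kind of check in plain JavaScript; the fields and values here are hypothetical examples, not the actual ONEOS API:

```javascript
// Validate a JSON response body against a few expected rules, similar to
// the assertions one writes in Postman test scripts.
// The status code and fields below are hypothetical, for illustration only.
function checkUserResponse(status, body) {
  const failures = [];
  if (status !== 200) failures.push(`expected status 200, got ${status}`);
  if (typeof body.id !== 'number') failures.push('id should be a number');
  if (typeof body.email !== 'string' || !body.email.includes('@')) {
    failures.push('email should be a valid-looking string');
  }
  if (!Array.isArray(body.devices)) failures.push('devices should be an array');
  return failures;
}

// Mocked response for illustration.
const mockBody = { id: 42, email: 'user@example.com', devices: ['ring'] };
console.log(checkUserResponse(200, mockBody)); // → [] (no failures)
```

Collecting failures into a list, rather than stopping at the first one, makes it easier to report every problem with a response in a single test run.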

Applied Technology

To enhance the efficiency and effectiveness of manual testing, we used the Cypress automation tool for end-to-end testing of the Wearatec admin portal, examining the entire app from start to finish. JMeter was used for load and stress testing to identify the system’s breaking point by pushing it beyond normal operating conditions.

Similarly, we used the Apptim mobile testing tool to test the performance and functionality of the mobile apps. It provided us with tools for testing and optimizing mobile apps, including tests for user experience, app speed, and battery drain. We also performed API testing to debug the API responses.


  • The stress test results indicate that the system performs well up to 200 users for login and customers, with a ramp-up time of 1 second.
  • Beyond that, the response time and error rate start to degrade significantly. The average CPU usage for 1000 concurrent users was around 60%.
  • The developed features meet the defined requirements of Wearatec.
  • Mobile app performance testing indicates that ‘Application CPU’ usage has an average value of 3.35% and a maximum value of 12.5%, both within acceptable limits.
  • The mobile app performance report indicates that “RAM Memory PSS” usage has an average value of 135.3 MB, below the maximum of 148.27 MB. The application’s memory usage is therefore within acceptable limits.
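The acceptance checks above reduce to comparing measured averages and peaks against limits. A small sketch of that comparison; the measured values come from the report above, while the limit values are illustrative assumptions, not Wearatec’s actual thresholds:

```javascript
// Check whether every measured metric is at or below its limit.
// Measured values are from the performance report; the limits are
// illustrative assumptions for this sketch.
function withinLimits(metrics, limits) {
  return Object.entries(limits).every(([name, limit]) => metrics[name] <= limit);
}

const measured = { avgCpuPct: 3.35, maxCpuPct: 12.5, avgRamMb: 135.3 };
const limits = { avgCpuPct: 5, maxCpuPct: 15, avgRamMb: 148.27 };

console.log(withinLimits(measured, limits)); // → true
```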

Final Thoughts

Gurzu is a software development company passionate about building software that solves real-life problems. Explore some of our awesome projects in our success stories.

Need help with automating your software tests? Gurzu engineers can help you! Drop us a message.

Have a tech idea that you need help turning into reality? Book a free consulting session with us!