Today, there is an abundance of apps available for smartphones, covering almost any use case one can think of. Network operators and service providers want to ensure that their mobile connections offer everyone a smooth usage experience, especially for the most popular smartphone apps. However, the full performance of such an application service is not under the mobile network operator’s control. The entire chain, not just the performance of the airlink, determines the quality of experience (QoE).
So, how can mobile operators optimize networks for apps? To do this, operators need tests targeting mobile data applications.
KPIs of mobile data application tests
Classical network tests are usually based on very technical parameters such as HTTP throughput, ping response time, and UDP packet loss rate. Trigger points for measuring the durations of certain processes can often be extracted from the phone’s RF trace information or from the IP stream, because the structure of the test is known and the protocol information remains unencrypted.
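To illustrate the kind of KPIs such classical tests produce, here is a minimal sketch that measures HTTP download throughput and a ping round-trip-time summary in Python. The test file URL and host name are hypothetical placeholders, not part of this article or any particular test tool.

```python
# Minimal sketch of two classical KPIs: HTTP download throughput and
# ping-style round-trip time. The URL and host are placeholders.
import subprocess
import time
import urllib.request

TEST_FILE_URL = "https://testserver.example.com/10MB.bin"  # hypothetical self-hosted test file
PING_HOST = "testserver.example.com"                        # hypothetical test server


def http_throughput_mbps(url: str) -> float:
    """Download the file once and return the average throughput in Mbit/s."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        payload = response.read()
    duration = time.monotonic() - start
    return (len(payload) * 8) / (duration * 1e6)


def ping_summary(host: str, count: int = 5) -> str:
    """Run the system ping command (Linux/macOS flags) and return its output."""
    result = subprocess.run(["ping", "-c", str(count), host],
                            capture_output=True, text=True, check=False)
    return result.stdout


if __name__ == "__main__":
    print(f"HTTP throughput: {http_throughput_mbps(TEST_FILE_URL):.1f} Mbit/s")
    print(ping_summary(PING_HOST))
```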
When a test is created for a specific service offered by a smartphone app, the possibilities are more limited. There are many differences compared to classical data tests:
- The IP traffic between client device and service is usually encrypted, and the payload cannot be analyzed.
- The app’s data handling is hidden. Either the client or the server might perform data compression at some point; this takes time and may count towards the test duration even though nothing network-relevant is happening.
- The server hosting the service belongs to a third party and cannot be controlled by the network operator or the company conducting the measurement campaign. In addition, the service may behave differently from test to test, or over time, because of dynamic adaptation and changes in the app and server setups.
- Apps are consumer software with certain instabilities and are subject to frequent updates. Often, the user is forced to update to the newest version to keep the service working.
- The services are subject to change, and even with the same app version the network interaction may change suddenly due to server influences.
It is, therefore, not possible to create a technical mobile data application test that analyzes those services under reproducible conditions. Often it is not even possible to see intermediate trigger points in the IP or RF traces. Only the information shown on the user interface is available to evaluate these services.
Throughputs, transfer and response times, and other technical measures cannot be obtained with confidence; they are influenced by server- and client-internal setups. They therefore do not reflect the network performance and provide almost no valuable information for network characterization.
What drives user perception?
Technical parameters are not a valid indicator when measuring and scoring the user’s perception of and satisfaction with these services. If technical parameters cannot be used to measure QoE, we should ask: what drives user perception and satisfaction?
The most reliable way to measure a subscriber’s perceived QoE is to observe the user interface directly. This is the only information the user has. Consequently, the user experience can only be determined from the information visible to the user. The interface provides all sorts of feedback, but the main criteria are, of course, the success of the task and the time needed to finish it, e.g., to download something or to receive confirmation that a message has been delivered:
- Can the desired action or set of actions be successfully completed?
- Is the overall duration short, or better still, short enough from the user’s perspective?
Fig. 1 below shows the timeline of a mobile data application test. The user starts with a request (e.g., opening the home screen of Facebook) and finishes the interaction with the service after 1 to n individual actions (such as posting a picture and commenting on another post). The user experience will be good if all actions can be completed successfully and the overall duration is short.
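To make these two criteria concrete, the following sketch records an app test as a sequence of user-visible actions and derives the two session-level KPIs from them. All names and numbers are illustrative assumptions, not values from a specific test tool.

```python
# Illustrative sketch: an app test as a sequence of user actions, with the two
# fundamental KPIs (overall success and overall duration) derived from them.
from dataclasses import dataclass


@dataclass
class ActionResult:
    name: str          # e.g. "open_home_screen", "post_picture"
    success: bool      # did the action complete as seen on the user interface?
    duration_s: float  # time from trigger to visible completion


def session_kpis(actions: list[ActionResult]) -> dict:
    """Derive the two fundamental KPIs from the individual actions."""
    return {
        "session_success": all(a.success for a in actions),
        "session_duration_s": sum(a.duration_s for a in actions),
    }


# Example session: open the feed, post a picture, comment on another post.
session = [
    ActionResult("open_home_screen", True, 1.8),
    ActionResult("post_picture", True, 4.2),
    ActionResult("comment_on_post", True, 1.1),
]
print(session_kpis(session))  # overall success plus a total duration of roughly 7 s
```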
It should be noted that the time to finish a task is still a technical parameter. It should not be linearly translated into a QoE score such as a mean opinion score.
The dependency between the technical term ‘time’ and the perceived wait for the expected result is not linear; there are saturation effects at both ends of the scale. A shorter time will not improve the perceived QoE if the duration is already very short, and a bad QoE may not get worse if the duration increases further.
The translation of task duration (time) into QoE is not a simple task. It depends on the type of service, the (increasing) expectations of subscribers, and the feedback culture of the service, e.g., intermediate feedback or other diversions, maybe even smartly placed advertisements.
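One common way to model such a saturating relationship is a logistic (S-shaped) curve. The sketch below only illustrates the shape; the midpoint, steepness, and score range are arbitrary example values, not figures from this article.

```python
# Illustrative non-linear mapping from task duration to a MOS-like QoE score.
# The logistic shape saturates at both ends; all constants are example values.
import math


def duration_to_mos(duration_s: float,
                    midpoint_s: float = 8.0,
                    steepness: float = 0.6) -> float:
    """Map a task duration in seconds to a score between 1 (bad) and 5 (excellent)."""
    return 1.0 + 4.0 / (1.0 + math.exp(steepness * (duration_s - midpoint_s)))


for d in (1, 4, 8, 15, 30):
    print(f"{d:>3} s -> MOS {duration_to_mos(d):.2f}")
```

With these example constants, a 1 s task scores close to 5, an 8 s task sits in the middle of the scale, and very long tasks saturate near 1; shortening an already short task barely changes the score.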
There is a high degree of underlying complexity in app tests, and many factors other than network quality influence the results, including server performance, phases without data transfer, additional communications protocols, and more. This limits the areas where app tests of third-party services can be applied, and it is important not to misuse them for technical measurements such as optimization tasks. Consequently, it is better to characterize the network’s performance using clearly defined technical tests, such as self-hosted HTTP transfers, and to restrict app testing to measuring user QoE.
Mobile network testing with app tests
Network operators want to optimize their network for real use-case scenarios with mobile applications, for example, when trying to achieve a good benchmarking result. However, as noted previously, the performance of such an application is not under full control of the mobile network operator. The entire chain, and not just the performance of the airlink, determines the QoE.
The chain starts with the performance of the third-party servers, the actions taken by companies such as YouTube or Facebook, and the way their services are linked to the internet. This is not under the operator’s control. Of course, the connection of the operator’s core network to the internet may also influence the final performance. The operator can solve such issues, but they are often not easily visible when focusing only on RF parameters; here, a wider analysis that includes the higher layers is required.
Finally, the app itself has a large influence on the overall performance. The app is usually in close communication with the server at the other end and adjusts to momentary channel states through feedback loops.
We also have to consider that even a simple service such as video streaming or opening a website initiates much more than just one link to the content server. There are many connections and individual parallel activities to deliver advertisements and wrapping information and to report back user settings and preferences.
Far from all of these background activities are visible to, or wanted by, the user; however, they are part of the service and consume both data capacity and time. Mobile network optimization, on the other hand, needs technical tests with detailed results that reveal the technical parameters where improvements can be made.
Consequently, there is no single test that can be used both to optimize the network and to guarantee the best QoE for the targeted app. Instead, an iterative procedure should be followed to achieve the best results (a schematic sketch follows the list below):
- Use the target app test to check the QoE from a real user perspective. This test includes the entire chain that determines the QoE for the users. Here, it is very important to mimic the real use case as closely as possible by using typical file sizes, types, and so on.
- If the QoE is not satisfactory, determine which technical parameter in the network could be optimized. This might require additional technical testing. Weaknesses on the client or server side cannot be addressed directly, but sometimes a change in the network can reduce their impact.
- After optimization, repeat the app test and compare the results with step 1. If the QoE does not improve, further optimization of the technical parameter in question might not be helpful and you may have reached the optimization limit. The network is ‘good enough’ in this respect.
- Go back to step 2 and determine more optimization points until the app test QoE is satisfactory.
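The loop below summarizes this procedure schematically. Every function it calls is a placeholder for a manual or tool-supported activity, and the target score and iteration limit are arbitrary example values; none of this refers to an existing API.

```python
# Schematic sketch of the iterative optimization procedure described above.
# All callables are placeholders for drive tests and analysis activities.

QOE_TARGET = 4.0      # example QoE target on a 1..5 MOS-like scale
MAX_ITERATIONS = 5    # stop eventually even if the target is never reached


def optimize_for_app_qoe(run_app_test, find_candidate, apply_optimization):
    score = run_app_test()                   # step 1: QoE from the real user perspective
    for _ in range(MAX_ITERATIONS):
        if score >= QOE_TARGET:
            break                            # QoE is satisfactory, stop optimizing
        candidate = find_candidate()         # step 2: technical parameter worth improving
        if candidate is None:
            break                            # no promising candidates left
        apply_optimization(candidate)        # adjust the network accordingly
        new_score = run_app_test()           # step 3: repeat the app test and compare
        if new_score <= score:
            continue                         # optimization limit reached for this parameter
        score = new_score                    # step 4: keep the gain and look for the next point
    return score
```

In practice, each of these callables corresponds to a measurement campaign or an engineering change rather than a single function call; the code only makes the control flow of the procedure explicit.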
As an example, a mobile network operator might have been rated slow for uploads to Facebook in a benchmarking campaign. Using classical HTTP transfer tests, the operator finds that its average HTTP transfer throughput is lower than that of its competitors. But after another Facebook test in a cell optimized for high throughput, it becomes clear that the upload of a file to Facebook is only marginally faster than before. The average available HTTP transfer throughput was already good enough.
A closer look at the results of the Facebook test reveals that the most important factor for the upload speed is not the throughput. Instead, it is crucial how fast a third-party server can be accessed from the network and whether a preference for the high-performance Facebook servers is available. After improving third-party server accessibility, the app test QoE is finally satisfactory.
While the general strategy is clear, namely that both app tests and technical tests are needed to yield the best QoE in real use-case scenarios, the challenge remains to find out which technical parameters are promising optimization candidates that impact the QoE. Many parameters besides average and peak throughput may play an important role (a small round-trip-time sketch follows the list):
- Round-trip times to third-party servers
- Performance of preferred third-party servers
- Throughput ramp-up duration
- Throughput continuity
- Bottlenecks in the core
- Inter-radio access technology handover
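As a small illustration of the first two items, round-trip times towards third-party servers can be approximated by timing a TCP connection setup. The host names below are arbitrary examples, and the connect time is only a rough proxy for the RTT an app actually experiences.

```python
# Minimal sketch: approximate the round-trip time towards third-party servers
# by timing the TCP connection setup. Host names are arbitrary examples.
import socket
import time

THIRD_PARTY_HOSTS = ["www.facebook.com", "www.youtube.com"]


def tcp_connect_time_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the time needed to establish a TCP connection, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0


for host in THIRD_PARTY_HOSTS:
    try:
        print(f"{host}: {tcp_connect_time_ms(host):.1f} ms")
    except OSError as err:
        print(f"{host}: connection failed ({err})")
```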
Conclusion
Today’s smartphone users have high expectations of mobile networks and their applications. They want things to work and they want them to work fast, which leads to the two fundamental key performance indicators (KPIs) expressing the QoE for data transfer (and messaging) app tests. The complexity of the underlying transfer and processing chain is substantially higher than in classical data transfer test scenarios. This requires network operators to apply a more complex optimization strategy, testing the entire system in very realistic use-case scenarios and taking the large number of influencing factors into account.
The process of determining the technical parameters that can really lead to a higher QoE needs to be iterative and must involve real app tests as well as technical tests that deliver more detailed results. Ideally, the QoE of important services should also be monitored continuously, since frequent changes in the functioning of third-party services can cause sudden performance drops and may even create new requirements that impact QoE.