The design and implementation of an Android Studio plugin to support task analysis annotation for automated usability evaluation

Durán Saez, Rafael (2017). The design and implementation of an Android Studio plugin to support task analysis annotation for automated usability evaluation. Thesis (Master thesis), E.T.S. de Ingenieros Informáticos (UPM).


Title: The design and implementation of an Android Studio plugin to support task analysis annotation for automated usability evaluation
  • Durán Saez, Rafael
  • Ferré Grau, Xavier
  • Qin, Liu
Item Type: Thesis (Master thesis)
Masters title: Ingeniería del Software
Date: December 2017
Freetext Keywords: Task analysis; Automated usability evaluation; Android Studio; Plugin; Usability
Faculty: E.T.S. de Ingenieros Informáticos (UPM)
Department: Lenguajes y Sistemas Informáticos e Ingeniería del Software
Creative Commons Licenses: Attribution - NonCommercial - NoDerivatives



This master thesis proposal aims to contribute to the overall research objective of how to automatically or semi-automatically analyze the usability of mobile applications, in order to inform the decision process of what to pursue in each iteration of an iterative development process, in an integrated manner with the IDE tools typically used to develop this kind of application. More specifically, the thesis work focuses on proving the feasibility of extending an IDE with usability annotation features for Android mobile applications, so that usage data is collected for further analysis. The purpose and significance of the study are as follows:
- Identification and definition of a model of the tasks that users are expected to carry out with the app.
- Instrumentation of data-gathering functionality for the modeled tasks in the app code.
- Elaboration of an automated usability evaluation method based on the UCD approach from the HCI field.
These aims will result in a more efficient and cost-effective way to apply automated usability evaluation, so that usability problems can be discovered from actual user usage and addressed in future versions. Usability is a key quality attribute for mobile applications, since the market is highly competitive. Current usability evaluation techniques from the Human-Computer Interaction (HCI) field are costly in terms of budget and time, and hard to apply because of the wide range of different contexts of use in which mobile apps run today, with more still to come. They also imply a high cost of resources for full-scale usability testing. Mobile app development demands that fewer resources be spent on such operations.
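The first aim above, a model of the tasks users are expected to carry out with the app, could be represented as an ordered sequence of expected interaction events. A minimal sketch in Java, in which all class, field and event names are hypothetical illustrations rather than the thesis's actual design:

```java
import java.util.List;

/** Hypothetical sketch of a user-task model: a named task that the user is
 *  expected to perform, defined as an ordered sequence of UI events.
 *  Event strings such as "tap:loginButton" are illustrative placeholders. */
public class TaskModel {
    private final String taskName;
    private final List<String> expectedEvents;

    public TaskModel(String taskName, List<String> expectedEvents) {
        this.taskName = taskName;
        this.expectedEvents = expectedEvents;
    }

    public String getTaskName() { return taskName; }

    /** A task counts as completed when the logged trace contains the
     *  expected events in order (other events may occur in between). */
    public boolean isCompletedBy(List<String> loggedEvents) {
        int next = 0; // index of the next expected event
        for (String event : loggedEvents) {
            if (next < expectedEvents.size() && event.equals(expectedEvents.get(next))) {
                next++;
            }
        }
        return next == expectedEvents.size();
    }
}
```

A trace that interleaves unrelated events (e.g. typing a password between opening the screen and tapping the button) still completes the task under this definition, which matches the idea that real usage deviates from the optimal path.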
Automated usability evaluation techniques are an option for managing usability in this demanding environment, automating the three main phases of any usability evaluation: capture, analysis and critique of usage data. Traditional usability testing takes a lot of resources, and it can be supplemented with automatic solutions. Automated usability evaluation under real-life conditions focuses only on data taken from users' interaction with the app to be analyzed. In the software engineering world, usability is a quality attribute that can be defined as the degree to which a software product can be used with effectiveness, efficiency and satisfaction by users with different characteristics. Usability can be decomposed into the following characteristics: learnability, how easily the user is able to learn to use a software object the first time they use it; efficiency, how fast users are able to accomplish a specific task; memorability, how easy it is for users to efficiently use a software object after a considerable period of time without using it; errors, the number of errors a user makes when using a software object and their severity; and satisfaction, how pleased the user feels after using the software object. Another related and relevant concept is user experience (UX), which reflects the user's feelings and sensations after using a product or service. Usability has been a fundamental concept for Interaction Design research and practice since the dawn of Human-Computer Interaction (HCI) as an interdisciplinary endeavor. For some, it was and remains HCI's core concept. HCI is the discipline that aims to develop interactive systems with a high level of usability. Usability evaluation is a cornerstone of User-Centered Design (UCD), a development approach for achieving a good level of usability in software systems.
In particular, according to ISO 9241-210, feedback from users during operational use identifies long-term issues and provides input for future design. Usability testing with a set of representative users is the most relevant technique for usability evaluation. Assessing usability under real-use conditions is one of the hardest challenges that experts in usability evaluation for mobile apps face, due to the changing context of use. Usability evaluation methods may be the most important among the diverse HCI techniques that exist today for attaining usable products. They aim to gather data from user actions that show how users are actually using the system and which problems they face, and to compare the real usage patterns with the expected, optimal ones, in order to identify usability problems and solve them in future releases. Usability evaluation processes involve different activities depending on the methods employed and the features of the procedures, but three main activities are common to all of them: capture, the collection of usage data from users' activity; analysis, the interpretation of usage data in order to identify usability problems; and critique, the proposal of ways to solve the problems found and finally remove them. The challenge of developing more usable applications has led to the emergence of a variety of methods, techniques, and tools with which to address usability issues. In 2001, Ivory and Hearst studied the state of the art in automated methods for evaluating usability in their influential paper "The state of the art in automating usability evaluation of user interfaces", classifying automated usability evaluation techniques into a taxonomy. From 2001 to the present, interactive systems have evolved greatly, and with them the potential to automate usability evaluation, for example through the ability to collect and analyze large amounts of data.
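The analysis activity described above, comparing real usage patterns against the expected optimal ones, can be sketched as a simple deviation count: events in the logged trace that do not advance the optimal path. This is an illustrative stand-in for the data mining techniques the thesis proposes, not its actual algorithm:

```java
/** Hypothetical sketch of the analysis step: comparing a user's logged
 *  event sequence against the optimal path for a task and reporting how
 *  many extra, off-path steps the user took. */
public class UsagePatternAnalyzer {

    /** Counts logged events that do not advance an in-order traversal of
     *  the optimal sequence; a high count may flag a usability problem. */
    public static int extraSteps(java.util.List<String> optimal,
                                 java.util.List<String> logged) {
        int next = 0;   // next expected step on the optimal path
        int extras = 0; // events deviating from that path
        for (String event : logged) {
            if (next < optimal.size() && event.equals(optimal.get(next))) {
                next++;
            } else {
                extras++;
            }
        }
        return extras;
    }
}
```

Aggregated across many users, a step on which deviations cluster points the usability expert to a concrete screen or control for the critique phase.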
The continuous delivery paradigm, which is nowadays a feature of mobile app development, naturally fits the aim of evaluating usability continuously in order to inform the design decisions taken to refine the design in subsequent deliveries. User interaction in mobile apps involves many more aspects than just navigation issues, since every mobile platform offers a different UX. A huge amount of resources is consumed when users are recruited to carry out field studies. Automated usability evaluation is a potentially good alternative, offering the possibility of gathering information from actual app usage. In particular, data capture for automated usability evaluation involves using software that automatically records usability data. Google Analytics for Mobile Apps (GAMA) offers a cloud service for keeping track of user actions in mobile apps. This tool was designed for marketing purposes, but it is also used to measure UX characteristics. Automated solutions provide a manageable answer to the problem of changing contexts of use and user diversity. In addition, we can use data mining techniques to perform the analysis of the gathered data and provide the results to usability experts for later critique. The main issue is that none of the existing methods offers a comprehensive solution for extending mobile analytics to support automated usability evaluation, including a strong HCI basis that ensures the success of the usability-related project goals. We propose to instrument the app code with calls to the Google Analytics logging service and apply data mining to usage log analysis in order to identify possible usability problems. The proposed approach is based on user and task analysis, which serves as the basis for instrumenting the user interface (UI).
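The instrumentation idea above, each UI handler that belongs to a modeled task sending an event to the analytics service, can be sketched with a local stand-in logger. In a real Android app the `send` call would go to the Google Analytics tracker instead; the stand-in keeps the sketch self-contained and runnable, and its names are assumptions, not the thesis's API:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of app-code instrumentation: a minimal logger whose
 *  send() mirrors the category/action fields of an analytics event hit.
 *  In production this class would delegate to the Google Analytics
 *  logging service rather than record events locally. */
public class UsageLogger {
    private final List<String> recorded = new ArrayList<>();

    /** Records one usage event as "category/action", e.g. an event-handler
     *  for a login button might call send("task:login", "tap:loginButton"). */
    public void send(String category, String action) {
        recorded.add(category + "/" + action);
    }

    /** The captured trace, in the order the events occurred. */
    public List<String> recordedEvents() {
        return recorded;
    }
}
```

Tagging each event with the task it belongs to (the category here) is what lets the later analysis step regroup a raw event stream into per-task traces.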
Nowadays, as mentioned in the previous section, there is no toolkit, framework or theoretical model that can, independently of the application domain, log data from an app taking user actions into account, analyze the gathered information and critique the results so that they can be improved in future releases. Our proposal does not preclude the application of alternative usability techniques, such as heuristic evaluation, cognitive walkthroughs or formal usability testing. The challenge that usability and UX pose in any mobile app development effort calls for the application of complementary usability techniques throughout the development process. As mobile apps and usability goals vary, the intended method needs to be applied to a variety of mobile app development projects, in order to customize the usability tracking approach for different domains. Despite the existence of some initial attempts to automate the usability evaluation of mobile applications, none of them fully integrates the definition of usability-relevant user tasks with the logging of usage data that can later be analyzed with data mining techniques to uncover usability problems. Our proposed solution will address both the theoretical approach to modeling user tasks, with the aim of instrumenting application code to log user events when such tasks are undertaken, and the practical integration of these activities into a widely used IDE (Integrated Development Environment) to facilitate adoption by software development organizations, in particular by designing and implementing a plugin for the Android Studio IDE in charge of carrying out the approach's objectives. Thus, the resulting overall framework and tool will be the first solution to the problem of introducing usability evaluation into mobile application development under a continuous delivery paradigm in a cost-effective way.
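The plugin's core job, inserting logging calls into annotated event handlers, can be sketched at the level of source text. A real Android Studio plugin would perform this edit through the IntelliJ Platform's PSI tree rather than string manipulation; this runnable approximation, with all names hypothetical, only illustrates the transformation:

```java
/** Hypothetical sketch of what the proposed Android Studio plugin would do
 *  at its core: given the source text of an event-handler method that the
 *  developer has marked as belonging to a task, insert a logging statement
 *  right after the method's opening brace. */
public class LoggingInstrumenter {

    /** Inserts the given logging statement after the first '{' of the
     *  method source; returns the source unchanged if no brace is found. */
    public static String instrument(String methodSource, String loggingCall) {
        int brace = methodSource.indexOf('{');
        if (brace < 0) return methodSource;
        return methodSource.substring(0, brace + 1)
                + "\n    " + loggingCall
                + methodSource.substring(brace + 1);
    }
}
```

For example, instrumenting a hypothetical `onClick` handler with `logger.send("task:login", "tap:loginButton");` places the logging call before the handler's own logic, so the event is recorded even if the handler later fails.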
The application of the proposed approach to an industry case study will offer a proof of feasibility that will make the research results stand out compared with other partial solutions in the current literature, which lack experimental backing. In order to achieve our goal, we must overcome the following possible difficulties. The HCI field offers a variety of methods for modeling user tasks, mainly used for manual assessment of user behavior by usability experts. A thorough analysis of this variety of methods will allow us to identify the most appropriate one for the purpose of the proposed research work: to serve as the basis for instrumenting the application code to log user events during the undertaking of the modeled tasks. The identification of the specific user events to log in order to evaluate user behavior when undertaking tasks is not straightforward. The research team in which this work will be carried out has already explored this issue and instrumented an application as a case study. The results of that case study will be taken as the basis for solving this problem in the proposed Master Thesis, generalizing them to any kind of mobile application. The expertise in usability of the Software Engineering Research Lab at UPM will provide the necessary background to ensure that usability is adequately integrated with the IDE. As a result of the proposed research work, we will obtain an automated usability evaluation method based on data mining for keeping track of the actual usage of mobile applications. It will show how GAMA can be applied to identify possible usability problems experienced by app users. The solution will focus on automated usability evaluation of usage under real-life conditions, due to the changing contexts in which mobile apps are regularly used. The application case study will prove the feasibility of the approach, suggesting future lines of research on the issue.

More information

Item ID: 50871
DC Identifier:
OAI Identifier:
Deposited by: Biblioteca Facultad de Informatica
Deposited on: 18 May 2018 11:31
Last Modified: 18 May 2018 11:31