When: 26/06/2018 16:00
Where: Fraunhofer IGD, Fraunhoferstr. 5, Room 324
Who: Akshay Madhav Deshmukh (Author), Dipl.-Inf. Dirk Burkhardt (Coordinator/Co-Supervisor), Prof. Dr. Arjan Kuijper (Supervisor)
What: Master Thesis – “Automated User Evaluation Analysis for a Simplified and Continuous Software Development”
In today’s world, computers are tightly coupled with the internet and play a vital role in business and many aspects of human life. Developing a high-quality user-computer interface has therefore become a major challenge. Well-designed programs that users find easy to use are moulded through a rigorous development life cycle. To ensure a user-friendly interface, the interface has to be well designed and needs to support smart interaction features. The user interface can become an Achilles’ heel of a developed system: simple design mistakes cause critical interaction problems, which eventually lead to a massive loss of attractiveness of the system. To overcome this problem, regular and consistent user evaluations have to be carried out to ensure the usability of the system.
The importance of an evaluation for the development of a system is well known. Most of today’s existing approaches require users to carry out an evaluation in a laboratory. Evaluators have to dedicate time to informing participants about the evaluation process and to providing a clear understanding of the questionnaires during the experiment. In the post-experiment phase, evaluators have to invest a large amount of time in generating a result report. On the whole, most of today’s existing evaluation approaches consume too much time for most developments.
The main aim of this thesis is to develop an automated evaluation management and result analysis, based on a previously developed web-based evaluation system, which makes it possible to elaborate the evaluation results and identify required changes to the developed system. The central idea is that an evaluation can be prepared once and repeated at regular intervals with different user groups. The automated evaluation result analysis makes it easy to check whether the continued development led to better results and whether a given set of tasks could be solved better, e.g. through newly added functions or an enhanced presentation.
Within the scope of this work, Human-Computer Interaction (HCI) was researched, in particular with regard to User-Centered Design (UCD) and user evaluation. Different evaluation approaches were examined, in particular evaluation through expert analysis and through user participation. Existing evaluation strategies and solutions oriented towards distributed evaluations, in the form of practical as well as survey-based evaluation methods, were reviewed. A proof of concept of an automated evaluation result analysis that enables easy detection of gaps and improvements in the system was implemented. Finally, the results from the research project Smarter Privacy were compared with the manually performed evaluation.