Abstract—With millions of apps available to smartphone users, and the ability to potentially install hundreds or even thousands of apps on one’s mobile device, new interfaces and voice control integrations are helping to make accessing these apps easier. However, users must still visually organize their apps for easier future access. On Apple iPhones, the “Edit Home Screen” function allows users to visually arrange and organize installed applications on their home screen. This study analyzes the “Edit Home Screen” interface and the contexts in which users from a wide variety of backgrounds interact with it, and explores alternative designs that could improve its usability.
1 QUALITATIVE EVALUATION
For the qualitative evaluation, I will evaluate my card prototype using post-event protocols. I chose post-event protocols because they limit the number of roles I would need to play simultaneously as an interviewer. Additionally, I do not want users to be more deliberate about their actions than they would be in a real-world scenario, as might occur in a think-aloud session, for example.
For this evaluation, I will select 3 – 5 participants made up of friends and/or family. I will recruit them via text, email, or in person, and ask for their involvement based on their schedule. The evaluation will take place in their respective homes, in a room where there are no distractions. To record participants’ actions while using the prototype, I will capture video with my phone camera.
For the post-event protocols, users will be asked to complete several different app organization tasks within the ‘Edit Home Screen’ interface. These tasks are mutually exclusive from one another, but together they make up the overall task of organizing the iPhone home screen. They are:
- Entering ‘Edit Home Screen’ mode by tapping and holding down on an app icon (see figure 7.1)
- Dragging app ‘A’ to swap its position with app ‘B’ (see figure 7.2)
- Tapping the undo button after making the previous change (see figure 7.2, left)
- Tapping and holding an app icon while ‘Edit Home Screen’ mode is enabled, and creating a new folder (see figure 7.3)
- Tapping the ‘Select’ button, selecting 4 apps, and moving the 4 apps to another location within the page (see figure 7.4)
- Tapping the ‘Select’ button, selecting 4 apps, and then tapping and holding those same apps to reveal a submenu (see figure 7.4)
- Exiting from the 4-app submenu, and canceling the selection of multiple apps (see figure 7.4)
- Tapping the ‘…’ button to reveal a submenu with various app organization functions (see figure 7.5, left)
As each participant completes the above tasks, they may need to verbalize their actions in real time due to the nature of the prototype, and I will occasionally provide verbal feedback when new cards are shown based on the participant’s actions. I will purposely refrain from over-explaining.
The data that will be gathered are the participants’ likes and dislikes once they complete each of the above tasks. I will also ask them what they were thinking as they completed each task, or what their goal was when taking specific actions. At the very end of the tasks, I will also ask for their level of satisfaction with each of the changes and added features, which include swapping app positions, the ‘Undo’ feature, selection of multiple apps, and the ‘…’ submenu. I will also ask what they expect should be in the ‘…’ submenu.
Based on defined requirements in the M2 assignment, prototypes must lead to “lower recorded instances of errors such as accidental page-turning, folder creation, misplacement, and exiting from the interface while dragging an app around.” Because this prototype is in its early stages, it will be difficult to know definitively if it will reduce the number of recorded errors. However, with swapping of app icons replacing the current way apps are moved to different locations within a page, “accidental folder creation” would be impossible. In terms of the data inventory, this evaluation would provide more clarity on novice iPhone users’ needs, their tasks, and their subtasks.
2 EMPIRICAL EVALUATION
2.1 Selection and Conditions
For empirical evaluation, I will be selecting my Wizard of Oz prototype, which utilizes Siri and voice control to organize the iPhone home screen.
What I wish to compare is the current ‘Edit Home Screen’ interface as it exists in iOS 14 and up, and my Siri-based, Wizard of Oz prototype. To be clear, both of these interfaces are quite complex, and for the sake of empirical evaluation, I can only compare the specific actions that these two interfaces share.
Therefore, the control condition for this empirical evaluation will be the act of moving an app between two other adjacent apps on the current, non-Siri ‘Edit Home Screen’ interface. The experimental condition will be the same act, but using the Wizard of Oz prototype. The two apps between which the primary app will be moved will be the same for both treatments – for example, the Safari and Photos apps. The primary app to be moved will also be the same for both treatments – the Notes app, for example. Finally, both treatments will take place in environments where there are no distractions.
2.2 Variables and Hypotheses
The dependent variable that will be tested in this evaluation will be error rates. This refers directly to the requirements of the prototypes that were defined previously.
My null hypothesis for this evaluation is that the error rates for the two treatments will be equal or similar. In other words, there will not be a difference between the two treatments, and voice control will have no effect on the error rate compared to the traditional method. My alternative hypothesis is that the error rates will be notably different, showing that there is a difference between the two treatments.
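Stated formally, with μ denoting each group’s mean error count (the symbols are my own notation, not from the assignment brief), the two-tailed hypotheses are:

```latex
H_0 : \mu_{\text{touch}} = \mu_{\text{voice}}
\qquad
H_1 : \mu_{\text{touch}} \neq \mu_{\text{voice}}
```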
2.3 Design and Method
The experimental method will be between-subjects – one half of the participants will be randomly assigned to the first treatment, and the other half to the second. Participants assigned to the traditional ‘Edit Home Screen’ treatment will each use a provided iPhone that already has ‘Edit Home Screen’ mode enabled, with all apps arranged in a default order. They will be tasked with dragging the Notes app between the two adjacent apps, Safari and Photos. Captured error data will include any taps or actions other than tapping on the Notes app or moving it between Safari and Photos. In other words, interval data will be captured, indicating the number of errors each participant commits before completing the task. A maximum interval value of 3 will be allowed before a participant is told to move on. This prevents the data from being skewed if a participant makes an error that leads to many more errors – accidental creation of a folder while dragging the primary app, for example. This cap will also match the maximum value allowed in the second treatment.
The other group of participants assigned to the non-traditional, Siri-based treatment will be told that they must use Siri to accomplish their task. Each will be provided an iPhone that will have the ‘Edit Home Screen’ mode enabled. However, there will be a visible notification letting the user know that they can say things to Siri such as “Hey Siri, move Notes app between Safari and Photos.” Captured interval error data for this treatment will include any mispronunciations, misplacements of the Notes app, or any other action besides moving the Notes app between Safari and Photos. The max interval value for number of errors will be 3 per participant.
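The error-capping rule above can be sketched as a small tallying routine. This is a minimal illustration of the study protocol, assuming each participant’s session is logged as an ordered sequence of action labels; the function and label names are hypothetical, not part of any real logging tool.

```python
# Cap on errors per participant, as defined in the study design above.
MAX_ERRORS = 3

def tally_errors(actions):
    """Count erroneous actions until the task succeeds or the cap is hit.

    `actions` is an ordered list of labels for one participant's session;
    the label "success" marks task completion, and any other label counts
    as one error.
    """
    errors = 0
    for action in actions:
        if action == "success":
            return errors
        errors += 1
        if errors >= MAX_ERRORS:
            return MAX_ERRORS  # participant is told to move on
    return errors

# Example: two stray actions before placing the Notes app correctly.
print(tally_errors(["stray_tap", "opened_folder", "success"]))  # prints 2
```

Capping every participant at the same maximum keeps a single runaway session (e.g., repeated accidental folder creation) from dominating the group’s totals.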
2.4 Analysis and Lurking Variables
To analyze the resulting interval data and check for differences between the two treatments, an independent-samples Student’s t-test will be used, comparing the two groups’ mean error counts along with their standard deviations.
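A minimal sketch of that analysis using only the Python standard library is shown below. The error counts are invented placeholder data, not study results; in practice the resulting t-statistic would be compared against a t-distribution (via a table or a stats package) to obtain a p-value.

```python
import math
import statistics

def pooled_t(a, b):
    """Student's t-statistic for two independent samples, pooled variance."""
    n1, n2 = len(a), len(b)
    m1, m2 = statistics.mean(a), statistics.mean(b)
    # Pooled sample variance assumes similar spread in both groups.
    sp2 = ((n1 - 1) * statistics.variance(a)
           + (n2 - 1) * statistics.variance(b)) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2  # t-statistic and degrees of freedom

touch_errors = [2, 1, 3, 0, 2, 1]  # traditional interface group (placeholder)
voice_errors = [1, 0, 1, 0, 2, 0]  # Siri prototype group (placeholder)

t, df = pooled_t(touch_errors, voice_errors)
print(round(t, 2), df)  # compare |t| against the critical value for df
```

With the small samples planned here (half of 3 – 5 participants per group would be even smaller), a nonparametric alternative might ultimately be more defensible, but the t-test matches the analysis named above.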
Random assignment of participants into two halves, one for each of the two treatments, mitigates the potential for lurking variables. However, the differences between touch-screen control and the voice control provided by Siri might still introduce unforeseen variables.
3 PREDICTIVE EVALUATION
3.1 Selection
For the predictive evaluation, I will be performing a cognitive walkthrough of the verbal ‘Edit Home Screen’ prototype.
3.2 Description of Tasks
The specific tasks that will be addressed are subtasks of the overall task of reorganizing the iPhone home screen, and they are:
- Dragging an app to a different page
- Undoing a previous reorganization action
- Selecting multiple apps
- Moving multiple selected apps within a page
- Moving multiple selected apps into a folder
3.3 User Goals
Referring to the data inventory, the user’s overarching goal is to organize their iPhone home screen in a way that allows them to efficiently access their apps and is pleasant for them. Within the ‘Edit Home Screen’ interface, the user’s goals become more low-level and reflect the previously mentioned subtasks. Operators available to the user will include tapping the touch screen, tapping and holding, dragging their finger as they hold, and releasing their finger from the screen.
I will be evaluating a user’s navigation around the interface as they figure out how to accomplish the above goals. In other words, evaluating learnability will be emphasized.
4 PREPARING TO EXECUTE
For the next assignment, I will be executing the qualitative and predictive evaluations. The reason for not selecting the empirical evaluation is that my Wizard of Oz prototype is not yet ready for it. It would be impractical to run an empirical evaluation for a prototype that requires synchronous contact with over 20 participants.