Publication Type
Journal Article
Version
acceptedVersion
Publication Date
5-2022
Abstract
Semantically rich information from multiple modalities (text, code, images, categorical and numerical data) co-exists in the user interface (UI) design of mobile applications. Moreover, each UI design is composed of inter-linked UI entities that support different functions of an application, e.g., a UI screen comprising a UI taskbar, a menu, and multiple button elements. Existing UI representation learning methods, unfortunately, are not designed to capture both the multi-modal attributes of UI entities and the linkage structure between them. To support effective search and recommendation applications over mobile UIs, we need UI representations that integrate the latent semantics present in both the multi-modal information and the linkages between UI entities. In this article, we present Multi-modal Attention-based Attributed Network Embedding (MAAN), a novel self-supervised model. MAAN is designed to capture the structural network information present in the linkages between UI entities, as well as the multi-modal attributes of the UI entity nodes. Based on the variational autoencoder framework, MAAN learns semantically rich UI embeddings in a self-supervised manner by reconstructing the attributes of UI entities and the linkages between them. The generated embeddings can be applied to a variety of downstream tasks: predicting UI elements associated with UI screens, inferring missing UI screen and element attributes, predicting UI user ratings, and retrieving UIs. Extensive experiments, including user evaluations, conducted on datasets from RICO, a rich real-world mobile UI repository, demonstrate that MAAN outperforms other state-of-the-art models. The number of linkages between UI entities can provide further information on the roles of different UI entities in UI designs; however, MAAN does not capture such edge attributes. To extend and generalize MAAN to learn even richer UI embeddings, we further propose EMAAN, which captures edge attributes. Additional extensive experiments on EMAAN show that it improves on the performance of MAAN and similarly outperforms state-of-the-art models.
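To make the reconstruction objective described above concrete, below is a minimal, hypothetical sketch (in PyTorch) of a variational autoencoder over an attributed graph: node attributes are encoded into latent embeddings, and training reconstructs both the node attributes and the linkages (adjacency). This is an illustrative simplification, not the authors' MAAN implementation; it omits the multi-modal attention encoders and the edge-attribute handling of EMAAN, and all names and dimensions are assumptions.

    # Toy attributed-graph VAE sketch (illustrative only, not the MAAN code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyGraphVAE(nn.Module):
        def __init__(self, attr_dim: int, latent_dim: int = 16):
            super().__init__()
            self.enc = nn.Linear(attr_dim, 2 * latent_dim)   # outputs mean and log-variance
            self.dec_attr = nn.Linear(latent_dim, attr_dim)  # reconstructs node attributes

        def forward(self, x: torch.Tensor, adj: torch.Tensor):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
            attr_hat = self.dec_attr(z)                           # attribute reconstruction
            adj_logits = z @ z.t()                                # inner-product link decoder
            recon = (F.mse_loss(attr_hat, x)
                     + F.binary_cross_entropy_with_logits(adj_logits, adj))
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
            return recon + kl, z

    # Usage on a tiny random attributed graph (5 nodes, 8-dim attributes):
    x = torch.randn(5, 8)
    adj = (torch.rand(5, 5) > 0.5).float()
    model = ToyGraphVAE(attr_dim=8)
    loss, embeddings = model(x, adj)
    loss.backward()

The key design point mirrored from the abstract is that the loss is purely self-supervised: it combines attribute reconstruction, link reconstruction, and a KL regulariser, so the learned embeddings need no task labels before being reused downstream.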
Keywords
network embedding, mobile application user interface, unsupervised retrieval, self-supervised learning, multi-modal, user interface design
Discipline
Databases and Information Systems | OS and Networks
Research Areas
Data Science and Engineering
Publication
ACM Transactions on Interactive Intelligent Systems
First Page
1
Last Page
29
ISSN
2160-6455
Identifier
10.1145/3533856
Publisher
Association for Computing Machinery (ACM)
Citation
ANG, Meng Kiat Gary and LIM, Ee-peng. Learning semantically rich network-based multi-modal mobile user interface embeddings. (2022). ACM Transactions on Interactive Intelligent Systems, 1-29.
Available at: https://ink.library.smu.edu.sg/sis_research/7269
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
http://doi.org/10.1145/3533856