Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
3-2022
Abstract
User interfaces (UI) of desktop, web, and mobile applications involve a hierarchy of objects (e.g. applications, screens, view classes, and other types of design objects) with multimodal (e.g. textual, visual) and positional (e.g. spatial location, sequence order, and hierarchy level) attributes. We can therefore represent a set of application UIs as a heterogeneous network with multimodal and positional attributes. Such a network not only represents how users understand the visual layout of UIs, but also influences how users would interact with applications through these UIs. To model UI semantics well for different UI annotation, search, and evaluation tasks, this paper proposes the novel Heterogeneous Attention-based Multimodal Positional (HAMP) graph neural network model. HAMP combines graph neural networks with the scaled dot-product attention used in transformers to learn the embeddings of heterogeneous nodes and associated multimodal and positional attributes in a unified manner. HAMP is evaluated with classification and regression tasks conducted on three distinct real-world datasets. Our experiments demonstrate that HAMP significantly outperforms other state-of-the-art models on such tasks. We also report our ablation study results on HAMP.
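The abstract refers to the scaled dot-product attention mechanism from transformers, which HAMP applies over heterogeneous node embeddings. As a point of reference only, the following is a minimal generic sketch of that mechanism in NumPy; it is not the authors' HAMP implementation, and all names here are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise similarity scores
    scores -= scores.max(axis=-1, keepdims=True)       # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys (rows sum to 1)
    return weights @ V                                 # attention-weighted sum of values

# Toy self-attention: 3 hypothetical node embeddings of dimension 4
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(X, X, X)            # shape (3, 4)
```

In a graph setting such as HAMP's, attention of this form is typically restricted to each node's neighbors rather than computed over all node pairs as in the dense sketch above.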
Keywords
Graph neural networks, transformers, attention mechanism, heterogeneous networks, multimodal, mobile application user interface, supervised learning
Discipline
Databases and Information Systems
Research Areas
Data Science and Engineering; Software and Cyber-Physical Systems
Publication
Proceedings of the 27th International Conference on Intelligent User Interfaces, Virtual, 2022 March 22-25
First Page
433
Last Page
446
ISBN
9781450391443
Identifier
10.1145/3490099.3511143
Publisher
ACM
City or Country
New York
Citation
ANG, Gary and LIM, Ee-peng.
Learning user interface semantics from heterogeneous networks with multi-modal and positional attributes. (2022). Proceedings of the 27th International Conference on Intelligent User Interfaces, Virtual, 2022 March 22-25. 433-446.
Available at: https://ink.library.smu.edu.sg/sis_research/6918
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3490099.3511143