Publication Type
Journal Article
Version
acceptedVersion
Publication Date
10-2017
Abstract
Recently, a tensor nuclear norm (TNN) based method [1] was proposed to solve the tensor completion problem, and it has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data, which are naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker (KKT) point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over state-of-the-art approaches, including the TNN [1] and matricization methods [2]–[5].
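The abstract's core idea, maintaining a low-rank estimate as a product of two small factors and updating them alternately while imputing the missing entries, can be illustrated with a minimal sketch. This is a simplified 2-D (matrix) analogue, not the paper's t-product tensor algorithm; all names, sizes, and parameters below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: alternating minimization for low-rank MATRIX completion,
# a simplified analogue of the paper's 3-way tensor factorization idea.
# (The actual method factorizes a tensor under the t-product; the ranks,
# sizes, and regularization used here are illustrative only.)

rng = np.random.default_rng(0)
n, r = 30, 3                        # ambient size and true rank
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6     # ~60% of entries observed

X = rng.standard_normal((n, r))     # left factor (small)
Y = rng.standard_normal((r, n))     # right factor (small)
lam = 1e-6                          # tiny ridge term for stable solves

for _ in range(200):
    # impute unobserved entries with the current low-rank estimate
    Z = np.where(mask, M, X @ Y)
    # alternating least-squares updates: each factor solve is cheap
    # because it only involves r x r systems, mirroring how updating
    # two small tensors avoids a full t-SVD
    X = Z @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(r))
    Y = np.linalg.inv(X.T @ X + lam * np.eye(r)) @ X.T @ Z

rel_err = np.linalg.norm(X @ Y - M) / np.linalg.norm(M)
```

The key design point the sketch mirrors is that each iteration touches only the two small factors (via r x r linear solves) rather than computing a full decomposition of the large array.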
Keywords
Tensor Factorization, Tensor Completion, Low-rank Factorization
Discipline
Databases and Information Systems
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
IEEE Transactions on Image Processing
Volume
27
Issue
3
First Page
1152
Last Page
1163
ISSN
1057-7149
Identifier
10.1109/TIP.2017.2762595
Publisher
Institute of Electrical and Electronics Engineers
Citation
ZHOU, Pan; LU, Canyi; LIN, Zhouchen; and ZHANG, Chao.
Tensor factorization for low-rank tensor completion. (2017). IEEE Transactions on Image Processing. 27, (3), 1152-1163.
Available at: https://ink.library.smu.edu.sg/sis_research/9057
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1109/TIP.2017.2762595