Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

6-2019

Abstract

Food computing is playing an increasingly important role in daily life, and has found tremendous applications in guiding human behavior towards smart food consumption and a healthy lifestyle. An important task under the food-computing umbrella is retrieval, which is particularly helpful for health-related applications, where we are interested in retrieving important information about food (e.g., ingredients and nutrition). In this paper, we investigate an open research task of cross-modal retrieval between cooking recipes and food images, and propose a novel framework, Adversarial Cross-Modal Embedding (ACME), to resolve the cross-modal retrieval task in the food domain. Specifically, the goal is to learn a common embedding feature space between the two modalities, and our approach consists of several novel ideas: (i) learning with a new triplet loss scheme together with an effective sampling strategy, (ii) imposing modality alignment using an adversarial learning strategy, and (iii) imposing cross-modal translation consistency such that the embedding of one modality is able to recover some important information about corresponding instances in the other modality. ACME achieves state-of-the-art performance on the benchmark Recipe1M dataset, validating the efficacy of the proposed technique.
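To make the abstract's first two ideas concrete, the sketch below shows one plausible way to combine a triplet loss with hard-negative sampling and an adversarial modality-alignment term in PyTorch. It is a minimal, hypothetical illustration, not the authors' implementation: the embedding dimension, discriminator architecture, loss weight, and sampling scheme are all assumptions, and the translation-consistency term (iii) and the discriminator's own training step are omitted for brevity.

```python
# Illustrative sketch only: assumed shapes, architecture, and loss weights,
# not the published ACME implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 1024  # assumed size of the shared embedding space


class ModalityDiscriminator(nn.Module):
    """Predicts which modality (recipe vs. image) an embedding came from."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 512), nn.ReLU(),
            nn.Linear(512, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # one logit per embedding


def triplet_loss_hard(recipe_emb, image_emb, margin=0.3):
    """Triplet loss over a batch, mining the hardest negative per anchor.

    Paired rows of recipe_emb / image_emb are positives; the most similar
    non-matching image in the batch serves as the negative.
    """
    r = F.normalize(recipe_emb, dim=1)
    v = F.normalize(image_emb, dim=1)
    sim = r @ v.t()                     # pairwise cosine similarities
    pos = sim.diag()                    # similarity of matching pairs
    mask = torch.eye(len(sim), dtype=torch.bool, device=sim.device)
    hardest_neg = sim.masked_fill(mask, float('-inf')).max(dim=1).values
    return F.relu(margin + hardest_neg - pos).mean()


def alignment_loss(disc, recipe_emb, image_emb):
    """Adversarial modality alignment: the embedding networks are updated
    with flipped labels so the discriminator cannot tell the modalities
    apart. (The discriminator itself would be trained with the true labels
    in an alternating step, omitted here.)
    """
    logits = torch.cat([disc(recipe_emb), disc(image_emb)])
    targets = torch.cat([torch.zeros(len(recipe_emb)),   # flipped labels
                         torch.ones(len(image_emb))])
    return F.binary_cross_entropy_with_logits(logits, targets)


if __name__ == "__main__":
    disc = ModalityDiscriminator()
    r = torch.randn(8, EMB_DIM)  # stand-in recipe embeddings
    v = torch.randn(8, EMB_DIM)  # stand-in image embeddings
    loss = triplet_loss_hard(r, v) + 0.1 * alignment_loss(disc, r, v)
    print(loss.item())
```

In a full pipeline, the stand-in tensors would come from a recipe encoder and an image encoder whose parameters receive gradients from this combined objective.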

Keywords

Big Data, Categorization, Large Scale Methods, Recognition, Detection, Retrieval

Discipline

Databases and Information Systems | Numerical Analysis and Scientific Computing

Research Areas

Data Science and Engineering

Publication

2019 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); Long Beach, CA, June 16-20

First Page

11572

Last Page

11581

ISBN

978-1-7281-3293-8

Identifier

10.1109/CVPR.2019.01184

Publisher

IEEE

City or Country

Piscataway, NJ

Copyright Owner and License

Authors/LARC

Additional URL

https://doi.org/10.1109/CVPR.2019.01184
