Publication Type

Journal Article

Version

acceptedVersion

Publication Date

12-2020

Abstract

Modeling the structure of culinary recipes is at the core of recipe representation learning. Current approaches mostly focus on extracting a workflow graph from a recipe's text descriptions. Process images, which constitute an important part of cooking recipes, have rarely been investigated in recipe structure modeling. We study this recipe structure problem from a multi-modal learning perspective, proposing a prerequisite tree that represents recipes with cooking images at a step-level granularity. We propose a simple-yet-effective two-stage framework that automatically constructs the prerequisite tree for a recipe by (1) using a trained classifier, which fuses multi-modal features as input, to detect pairwise prerequisite relations; and then (2) applying one of several strategies (a greedy method, maximum weight, or beam search) to build the tree structure. Experiments on the MM-ReS dataset demonstrate the advantages of introducing process images into recipe structure modeling. Moreover, compared with neural methods that require large amounts of training data, our two-stage pipeline achieves promising results using only 400 labeled prerequisite trees as training data.
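
To make the two-stage pipeline concrete, the sketch below shows one plausible reading of the greedy tree-building strategy: given a matrix of classifier scores for pairwise prerequisite relations, attach each step to its highest-scoring earlier step. The function name, score matrix, and the rule of restricting parents to earlier steps are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def greedy_prerequisite_tree(pair_scores: np.ndarray) -> dict:
    """Greedily attach each step to its most probable prerequisite.

    pair_scores[i, j] is a (hypothetical) classifier probability that
    step i is a direct prerequisite of step j. Only earlier steps are
    considered as parents, which keeps the result acyclic.
    Returns a mapping child -> parent (None for the root step).
    """
    n = pair_scores.shape[0]
    parents = {0: None}  # treat the first step as the root
    for j in range(1, n):
        candidates = pair_scores[:j, j]          # scores of all earlier steps
        parents[j] = int(np.argmax(candidates))  # pick the best-scoring parent
    return parents

# Toy usage: three cooking steps with made-up pairwise scores.
scores = np.array([
    [0.0, 0.9, 0.2],
    [0.0, 0.0, 0.8],
    [0.0, 0.0, 0.0],
])
print(greedy_prerequisite_tree(scores))  # {0: None, 1: 0, 2: 1}
```

The maximum-weight and beam-search strategies mentioned in the abstract would replace this per-step argmax with a global objective over the whole tree, trading speed for search quality.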

Keywords

Feature extraction, Training, Task analysis, Semantics, Pipelines, Deep learning, Predictive models, Food recipes, cooking workflow, prerequisite trees, multi-modal fusion, cause-and-effect reasoning

Discipline

Databases and Information Systems

Research Areas

Data Science and Engineering

Publication

IEEE Transactions on Multimedia

Volume

23

First Page

4491

Last Page

4501

ISSN

1520-9210

Identifier

10.1109/TMM.2020.3042706

Publisher

Institute of Electrical and Electronics Engineers

Additional URL

https://doi.org/10.1109/TMM.2020.3042706
